---
configs:
- config_name: default
  data_files:
  - split: dev
    path: "dev.jsonl"
license: apache-2.0
---

# [DCASE 2026 Challenge] Task 5 Development Set: Audio-Dependent Question Answering (ADQA)

<div align="center">

[![DCASE 2026](https://img.shields.io/badge/DCASE%202026-Task%205%20Dev%20Set-red.svg)](https://dcase.community/challenge2026/index#task5)
[![Paper](https://img.shields.io/badge/Paper-ICLR%202026-b31b1b.svg)](https://arxiv.org/abs/2509.21060)
[![Training Set](https://img.shields.io/badge/Training%20Set-AudioMCQ--StrongAC--GeminiCoT-yellow.svg)](https://huggingface.co/datasets/AudioMCQ-StrongAC-GeminiCoT)

</div>

This is the official **Development Set** for [DCASE 2026 Challenge Task 5: Audio-Dependent Question Answering (ADQA)](https://dcase.community/challenge2026/index#task5).

The ADQA task addresses **"textual hallucination"** in Large Audio-Language Models (LALMs): models that pass audio understanding benchmarks by relying on the text prompt and internal linguistic priors rather than on actual audio perception. ADQA introduces a rigorous evaluation framework built on **Audio-Dependency Filtering (ADF)**, which ensures that questions cannot be answered through common sense or text-only reasoning.

## Audio-Dependency Filtering (ADF)

All samples in this development set pass a four-step ADF hard-filtering process that guarantees genuine audio dependence:

1. **Silent-audio filtering:** questions that LALMs can solve without the audio are removed.
2. **LLM common-sense check:** ensures that external knowledge alone cannot solve the question.
3. **Perplexity-based soft filtering:** eliminates samples with text-based statistical shortcuts.
4. **Manual verification:** a final human-in-the-loop check of ground-truth accuracy.

## Statistics

| Metric | Count |
|--------|-------|
| Total Samples | 1,607 |
| Unique Audio Files | 1,607 |

### Data Sources

The development set is composed of two parts:

- **Existing benchmarks:** a portion of the samples is derived from established audio understanding benchmarks, including [MMAU](https://github.com/sakshi113/mmau), [MMAR](https://github.com/ddlBoJack/MMAR), and [MMSU](https://huggingface.co/datasets/ddwang2000/MMSU). These samples cover a wide range of audio understanding tasks spanning speech, music, and sound perception.
- **Human-annotated questions:** the remaining majority consists of newly constructed, human-annotated multiple-choice questions over diverse audio sources, designed to further challenge models on real-world audio comprehension.

All samples pass the four-step **Audio-Dependency Filtering (ADF)** process described above.

## Directory Structure

```text
DCASE2026-Task5-DevSet/
├── dev.jsonl       # Main data file (1,607 samples, shuffled)
├── dev_audios/     # Audio files (1,607 .wav files)
└── README.md
```

## Data Format

Each entry in `dev.jsonl` is a JSON object with the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique sample identifier (e.g., `dev_0001`) |
| `audio_path` | string | Relative path to the audio file |
| `question_text` | string | Question text |
| `answer` | string | Correct answer (one of the listed choices) |
| `multi_choice` | list[string] | Answer choices |

### Example

```json
{
  "id": "dev_0001",
  "audio_path": "dev_audios/dev_0001.wav",
  "question_text": "What is the speaker's primary emotion in this audio?",
  "answer": "Happiness",
  "multi_choice": ["Sadness", "Happiness", "Anger", "Fear"]
}
```

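Records in this format can be parsed with the Python standard library alone. A minimal sketch (the `parse_records` helper is illustrative, not part of the dataset) that reads JSONL lines and validates each record against the schema above:

```python
import json

# Field names from the table above.
REQUIRED_FIELDS = {"id", "audio_path", "question_text", "answer", "multi_choice"}

def parse_records(lines):
    """Parse JSONL lines and validate each record against the ADQA schema."""
    records = []
    for line_no, line in enumerate(lines, 1):
        if not line.strip():
            continue  # tolerate blank lines
        record = json.loads(line)
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"line {line_no}: missing fields {sorted(missing)}")
        # Sanity check: the gold answer must be one of the listed choices.
        if record["answer"] not in record["multi_choice"]:
            raise ValueError(f"line {line_no}: answer not among multi_choice")
        records.append(record)
    return records

# Typical use: records = parse_records(open("dev.jsonl", encoding="utf-8"))
```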
## Submission Format

The system output should be a `.csv` file with the following two columns:

| Column | Description |
|--------|-------------|
| `question` | The question ID (e.g., `dev_0001`) |
| `answer` | The system's answer; it must match one of the given choices |

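This page does not prescribe a writer or a scoring script, so the sketch below makes two assumptions: the CSV carries a header row with the two column names above, and exact-match accuracy is the metric. Both helpers (`write_submission`, `exact_match_accuracy`) are illustrative, not official tooling:

```python
import csv
import io

def write_submission(predictions, fileobj):
    """Write {question_id: answer} predictions as a two-column CSV."""
    writer = csv.writer(fileobj)
    writer.writerow(["question", "answer"])  # column names from the table above
    for qid in sorted(predictions):
        writer.writerow([qid, predictions[qid]])

def exact_match_accuracy(predictions, gold):
    """Fraction of gold questions whose predicted answer matches exactly."""
    correct = sum(predictions.get(qid) == answer for qid, answer in gold.items())
    return correct / len(gold)
```

For example, `write_submission(preds, open("submission.csv", "w", newline=""))` produces a file whose first line is the header and whose remaining lines are one prediction per question ID.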
## License

This dataset is distributed under the **Apache-2.0** license.

## Citation

If you use this development set or participate in DCASE 2026 Task 5, please cite:

```bibtex
@inproceedings{he2025audiomcq,
  title={Measuring Audio's Impact on Correctness: Audio-Contribution-Aware Post-Training of Large Audio Language Models},
  author={He, Haolin and others},
  booktitle={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2026}
}
```