nccm2p2 committed
Commit d05d50b Β· verified Β· 1 Parent(s): 00afee5

upload README.md

Files changed (1):
  1. README.md (+225 -3)

README.md CHANGED
@@ -1,3 +1,225 @@
- ---
- license: mit
- ---

# πŸŽ₯ MLD-VC: Multimodal Dataset for Video Conferencing

> **When AVSR Meets Video Conferencing: Dataset, Degradation, and the Hidden Mechanism Behind Performance Collapse (CVPR 2026)**
> πŸ“„ [[Paper\]](https://arxiv.org/abs/2603.22915) | πŸ€— [[Hugging Face Dataset\]](https://huggingface.co/datasets/nccm2p2/MLD-VC)

------

## πŸ“Œ Overview

**MLD-VC** is the **first multimodal dataset specifically designed for Audio-Visual Speech Recognition (AVSR) in real-world video conferencing (VC) scenarios**.

Unlike traditional AVSR datasets collected in controlled offline environments, MLD-VC explicitly models two critical factors in VC:

- **Transmission Distortions** (compression, speech enhancement, etc.)
- **Human Hyper-expression** (e.g., Lombard effect)

### πŸ” Key Features

- 🎀 **31 speakers**, 22.79 hours of recordings
- 🌐 **4 mainstream VC platforms**
- πŸ—£οΈ **Bilingual**: English & Chinese
- 🎧 **Lombard effect simulation** via noise conditions
- πŸŽ₯ Multimodal data:
  - Video
  - Audio
  - Facial landmarks
  - Text

------
## 🚨 Motivation

Existing AVSR systems show **severe performance degradation in video conferencing**, due to:

- Distribution shift caused by **speech enhancement algorithms**
- Behavioral changes such as **hyper-expression**

MLD-VC is designed to **bridge the gap between offline datasets and real-world VC deployment**.

------
## πŸ“‚ Dataset Structure

The dataset is organized into three aligned modalities:

```
MLD-VC/
β”œβ”€β”€ video/
β”œβ”€β”€ audio/
└── landmarks/
```

Each modality follows the **same hierarchical structure**:

```
<modality>/
└── Online / Offline
    └── speaker_id
        └── platform
            └── sentence_id
                └── clean / 40db / 60db / 80db
```

### πŸ“– Example

```
video/
└── Online/
    └── speaker_03/
        └── Zoom/
            └── sentence_012/
                β”œβ”€β”€ clean/
                β”œβ”€β”€ 40db/
                β”œβ”€β”€ 60db/
                └── 80db/
```

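Because all three modalities share this hierarchy, a sample index can be built by simply walking the directory tree. Below is a minimal sketch, assuming the dataset has been downloaded to a local `MLD-VC/` directory laid out exactly as above; `index_modality` is an illustrative helper, not part of an official loader.

```python
from pathlib import Path

def index_modality(root, modality):
    """Walk <root>/<modality>/<Online|Offline>/<speaker>/<platform>/<sentence>/<condition>
    and yield one record per condition folder (assumes the layout shown above)."""
    base = Path(root) / modality
    for path in sorted(base.glob("*/*/*/*/*")):
        if not path.is_dir():
            continue
        setting, speaker, platform, sentence, condition = path.relative_to(base).parts
        yield {
            "setting": setting,       # "Online" or "Offline"
            "speaker": speaker,       # e.g. "speaker_03"
            "platform": platform,     # e.g. "Zoom"
            "sentence": sentence,     # e.g. "sentence_012"
            "condition": condition,   # "clean", "40db", "60db", or "80db"
            "path": str(path),
        }

if __name__ == "__main__":
    # Example: index the video modality of a local copy of the dataset.
    records = list(index_modality("MLD-VC", "video"))
    print(f"indexed {len(records)} video condition folders")
```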
------

## 🧠 Data Description

### 1. Online vs Offline

- **Offline**:
  - Direct recording (no transmission)
  - Contains hyper-expression (via noise)
- **Online**:
  - Recorded after transmission through VC platforms
  - Includes:
    - Compression
    - Speech enhancement
    - Network effects

------
### 2. Noise Levels (Lombard Effect)

Each sentence is recorded under 4 noise conditions:

| Condition | Description    |
| --------- | -------------- |
| clean     | No noise       |
| 40dB      | Mild noise     |
| 60dB      | Moderate noise |
| 80dB      | Strong noise   |

These conditions simulate increasing **Lombard effect intensity**, inducing hyper-expression (see the sketch below).

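For Lombard-effect analysis it is often useful to contrast the neutral and loudest conditions of the same utterance. A small sketch under the same assumptions (local `MLD-VC/` root, folder names as above); the helper and the pairing logic are illustrative, not an official tool.

```python
from pathlib import Path

def lombard_pairs(root="MLD-VC", modality="audio", setting="Offline"):
    """Pair each clean utterance with its 80db counterpart for the same
    speaker / platform / sentence, e.g. to contrast neutral speech with
    strong Lombard speech. Assumes the folder layout shown above."""
    base = Path(root) / modality / setting
    pairs = []
    for clean_dir in base.glob("*/*/*/clean"):
        loud_dir = clean_dir.with_name("80db")  # sibling condition folder
        if loud_dir.is_dir():
            pairs.append((clean_dir, loud_dir))
    return pairs

if __name__ == "__main__":
    print(f"found {len(lombard_pairs())} clean/80db pairs")
```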
------

### 3. Platforms

The dataset includes recordings from multiple VC platforms, e.g.:

- Zoom
- Tencent Meeting
- Lark
- DingTalk

------
## ⚠️ Important Notes

### πŸ” Recording Protocol Differences

- In the **Offline subset**:
  - **Speakers 2–8**:
    - Recorded on **a single device**, repeated across the 4 platforms
  - Other speakers:
    - **DD platform only**, but actually recorded using **4 different devices simultaneously**

πŸ‘‰ As a result:

- Platform variation does **not** always imply device variation
- Take care when designing **cross-platform generalization experiments**

------

### ❌ Removed Speakers

- **Speakers 0 and 1 have been removed** due to poor recording quality

------
### πŸ“ Data Consistency

All three modalities (`video`, `audio`, `landmarks`):

- Are **strictly aligned**
- Share an identical folder structure
- Can be indexed jointly (see the sketch below)

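Because the three modality trees mirror each other, the aligned data for one utterance can be located by swapping the modality prefix. A minimal sketch, again assuming a local `MLD-VC/` root; `aligned_dirs` is an illustrative name, and since the files inside each leaf folder are not specified here, only the aligned directories are returned.

```python
from pathlib import Path

def aligned_dirs(root, setting, speaker, platform, sentence, condition):
    """Return the matching video / audio / landmarks directories for one
    utterance and noise condition, following the shared layout above."""
    rel = Path(setting) / speaker / platform / sentence / condition
    return {m: Path(root) / m / rel for m in ("video", "audio", "landmarks")}

if __name__ == "__main__":
    sample = aligned_dirs("MLD-VC", "Online", "speaker_03", "Zoom",
                          "sentence_012", "clean")
    for modality, path in sample.items():
        print(f"{modality:9s} -> {path}")
```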
------

## πŸ”¬ Recommended Use Cases

MLD-VC is suitable for:

### βœ” AVSR Robustness

- Evaluate performance under real VC conditions

### βœ” Cross-domain Generalization

- Train on Offline β†’ test on Online (see the sketch after this section)

### βœ” Multimodal Learning

- Audio-visual fusion
- Landmark-based modeling

### βœ” Distribution Shift Analysis

- Study the impact of:
  - Speech enhancement
  - Lombard effect

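A minimal sketch of the Offline β†’ Online protocol mentioned above, assuming a local `MLD-VC/` root: training folders are taken from the `Offline` tree and evaluation folders from the `Online` tree; `domain_split` is an illustrative helper, and how the media inside each folder is loaded is left to the user.

```python
from pathlib import Path

def domain_split(root="MLD-VC", modality="video"):
    """Split condition folders into an Offline training set and an Online
    test set, so only the transmission domain shifts between the two."""
    base = Path(root) / modality
    train = sorted(p for p in (base / "Offline").glob("*/*/*/*") if p.is_dir())
    test = sorted(p for p in (base / "Online").glob("*/*/*/*") if p.is_dir())
    return train, test

if __name__ == "__main__":
    train, test = domain_split()
    print(f"train (Offline): {len(train)} folders | test (Online): {len(test)} folders")
```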
------

## πŸ“Š Key Findings (from the paper)

- AVSR models suffer **massive degradation in VC**
- **Speech enhancement** is the main cause of audio distribution shift
- **Lombard effect β‰ˆ VC distortion (in feature space)**
- Landmark-based features are **more stable than image features**
- Fine-tuning on MLD-VC reduces CER by **17.5%**

------

## πŸ“Ž Citation

If you find this dataset useful, please cite:

```bibtex
@inproceedings{huang2026mldvc,
  title={When AVSR Meets Video Conferencing: Dataset, Degradation, and the Hidden Mechanism Behind Performance Collapse},
  author={Huang, Yihuan and Xue, Jun and Liu, Jiajun and Li, Daixian and Zhang, Tong and Yi, Zhuolin and Ren, Yanzhen and Li, Kai},
  booktitle={CVPR},
  year={2026}
}
```
------

## πŸ™ Acknowledgements

This work is supported by:

- National Natural Science Foundation of China
- DiDi Chuxing Group

------

## πŸ“¬ Contact

If you have questions, feel free to contact:

- **Yihuan Huang**: [yihuanhuang@whu.edu.cn](mailto:yihuanhuang@whu.edu.cn)

------

## ⭐ Star This Repo

If you find MLD-VC helpful, please consider giving a ⭐!