Update README.md #2
by DrowsyDuck123 - opened

README.md CHANGED
@@ -172,3 +172,69 @@ configs:
  - split: fr_segment_test
    path: data/fr_segment_test-*
---

# Dataset Card

## Dataset Description

This dataset is a benchmark built on the TalkBank [1] corpus, a large multilingual repository of conversational speech that captures real-world, unstructured interactions. We use CA-Bank [2], which focuses on phone conversations between adults and includes natural speech phenomena such as laughter, pauses, and interjections. To ensure the dataset is accurate and suitable for benchmarking conversational ASR systems, we apply an extensive set of preprocessing steps.
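
A minimal loading sketch, assuming the Hugging Face `datasets` library: the repository id below is a placeholder (this card does not state the actual id), while the `fr_segment_test` split name comes from the YAML config above.

```python
# Minimal sketch: load one split of this dataset with the `datasets` library.
# "org/conversational-asr-benchmark" is a placeholder repository id, not the
# dataset's actual id; the split name is taken from the YAML config above.
from datasets import load_dataset

ds = load_dataset("org/conversational-asr-benchmark", split="fr_segment_test")

print(ds)     # features and number of rows
print(ds[0])  # inspect the first example
```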

## Preprocessing Steps

We apply the following preprocessing steps to ensure the dataset's quality:

- Manual filtering of conversations
- Speaker-channel alignment
- Timestamp alignment using voice activity detection (VAD)
- Discarding segments based on Word Error Rate (WER) thresholds (see the sketch below)
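
To make the last step concrete, here is a hedged sketch of WER-based filtering. It is not the paper's actual pipeline: the threshold value and the `transcribe` function are placeholders, and only the `jiwer` WER computation is a real library call.

```python
# Hedged sketch of WER-threshold filtering, not the paper's exact pipeline.
# `transcribe` is a placeholder for any ASR system; `jiwer.wer` computes the
# word error rate between a reference transcript and an ASR hypothesis.
from jiwer import wer

WER_THRESHOLD = 0.5  # illustrative value only; the paper defines the real thresholds


def keep_segment(reference_text: str, audio_path: str, transcribe) -> bool:
    """Keep a segment only if its WER against the ASR hypothesis is within the threshold."""
    hypothesis = transcribe(audio_path)  # ASR output for this audio segment
    return wer(reference_text, hypothesis) <= WER_THRESHOLD


# Usage (given a list of segment dicts and an ASR function):
# segments = [s for s in segments if keep_segment(s["text"], s["audio"], transcribe)]
```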

## Paper and Code Repository

For a comprehensive explanation of the preprocessing pipeline and dataset details, refer to our paper [ASR Benchmarking: The Need for a More Representative Conversational Dataset](https://arxiv.org/abs/2409.12042) and explore our [GitHub repository](https://github.com/Diabolocom-Research/ConversationalDataset) for code and additional resources.

## Segmentation Types: Speaker Switch vs Annotation

We offer two types of segmentation for this dataset:

- **Annotation-based Segmentation**: Segments are derived directly from the annotations provided in the original TalkBank corpus.
- **Speaker Switch Segmentation**: We consolidate consecutive segments from the same speaker into a single, larger audio segment, providing an alternative structure for analysis (see the sketch below).
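
A small sketch of the speaker-switch consolidation idea; the `speaker`, `start`, `end`, and `text` field names are assumptions for illustration, not necessarily the dataset's actual schema.

```python
# Hedged sketch of speaker-switch segmentation: merge consecutive segments
# from the same speaker into one larger segment. Field names are assumptions.
def consolidate_by_speaker(segments):
    merged = []
    for seg in segments:
        if merged and merged[-1]["speaker"] == seg["speaker"]:
            # Same speaker as the previous segment: extend it.
            merged[-1]["end"] = seg["end"]
            merged[-1]["text"] += " " + seg["text"]
        else:
            merged.append(dict(seg))  # copy so the input list is untouched
    return merged


# Example:
# consolidate_by_speaker([
#     {"speaker": "A", "start": 0.0, "end": 1.2, "text": "hello"},
#     {"speaker": "A", "start": 1.2, "end": 2.0, "text": "there"},
#     {"speaker": "B", "start": 2.0, "end": 3.1, "text": "hi"},
# ])
# -> [{"speaker": "A", "start": 0.0, "end": 2.0, "text": "hello there"},
#     {"speaker": "B", "start": 2.0, "end": 3.1, "text": "hi"}]
```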

## Citations

If you use this dataset, please cite:

```bibtex
@article{maheshwari2024asr,
  title={ASR Benchmarking: Need for a More Representative Conversational Dataset},
  author={Maheshwari, Gaurav and Ivanov, Dmitry and Johannet, Th{\'e}o and Haddad, Kevin El},
  journal={arXiv preprint arXiv:2409.12042},
  year={2024}
}
```

In addition, please acknowledge the TalkBank dataset:

```bibtex
@article{macwhinney2010transcribing,
  title={Transcribing, searching and data sharing: The CLAN software and the TalkBank data repository},
  author={MacWhinney, Brian and Wagner, Johannes},
  journal={Gesprachsforschung: Online-Zeitschrift zur verbalen Interaktion},
  volume={11},
  pages={154},
  year={2010},
  publisher={NIH Public Access}
}
```

## Licensing Information

This dataset is released under the [CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0) license.

## References

[1] MacWhinney, Brian. "TalkBank: Building an open unified multimodal database of communicative interaction." (2004).

[2] MacWhinney, Brian, and Johannes Wagner. "Transcribing, searching and data sharing: The CLAN software and the TalkBank data repository." Gesprachsforschung: Online-Zeitschrift zur verbalen Interaktion 11 (2010): 154.