Files changed (2)
  1. README.md +134 -11
  2. ami.py +98 -91
README.md CHANGED
@@ -1,13 +1,74 @@
- # AMI Corpus
-
- https://groups.inf.ed.ac.uk/ami/corpus/
-
- To be filled!
 
  **Note**: This dataset corresponds to the data-processing of [KALDI's AMI S5 recipe](https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5).
  This means text is normalized and the audio data is chunked according to the scripts above!
  To make the user experience as simple as possible, we provide the already chunked data here so that the following can be done:
 
  ```python
  from datasets import load_dataset
  ds = load_dataset("edinburghcstr/ami", "ihm")
@@ -18,15 +79,15 @@ gives:
  ```
  DatasetDict({
  train: Dataset({
- features: ['segment_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
  num_rows: 108502
  })
  validation: Dataset({
- features: ['segment_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
  num_rows: 13098
  })
  test: Dataset({
- features: ['segment_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
  num_rows: 12643
  })
  })
@@ -39,10 +100,10 @@ ds["train"][0]
  automatically loads the audio into memory:
 
  ```
- {'segment_id': 'EN2001a',
  'audio_id': 'AMI_EN2001a_H00_MEE068_0000557_0000594',
  'text': 'OKAY',
- 'audio': {'path': '/home/patrick_huggingface_co/.cache/huggingface/datasets/downloads/extracted/2d75d5b3e8a91f44692e2973f08b4cac53698f92c2567bd43b41d19c313a5280/EN2001a/train_ami_en2001a_h00_mee068_0000557_0000594.wav',
  'array': array([0. , 0. , 0. , ..., 0.00033569, 0.00030518,
  0.00030518], dtype=float32),
  'sampling_rate': 16000},
@@ -70,4 +131,66 @@ The results are in-line with results of published papers:
  - [*Hybrid acoustic models for distant and multichannel large vocabulary speech recognition*](https://www.researchgate.net/publication/258075865_Hybrid_acoustic_models_for_distant_and_multichannel_large_vocabulary_speech_recognition)
  - [Multi-Span Acoustic Modelling using Raw Waveform Signals](https://arxiv.org/abs/1906.11047)
 
- You can run [run.sh](https://huggingface.co/patrickvonplaten/ami-wav2vec2-large-lv60/blob/main/run.sh) to reproduce the result.
 
+ ---
+ annotations_creators: []
+ language:
+ - en
+ language_creators: []
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: AMI
+ size_categories: []
+ source_datasets: []
+ tags: []
+ task_categories:
+ - automatic-speech-recognition
+ task_ids: []
+ ---
+
+ # Dataset Card for AMI
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+ - [Terms of Usage](#terms-of-usage)
+
+
+ ## Dataset Description
+
+ - **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/
+ - **Repository:** https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** [jonathan@ed.ac.uk](mailto:jonathan@ed.ac.uk)
+
+ ### Dataset Summary
+
+ The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
+ synchronized to a common timeline. These include close-talking and far-field microphones, individual and
+ room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
+ the participants also have unsynchronized pens available to them that record what is written. The meetings
+ were recorded in English using three different rooms with different acoustic properties, and include mostly
+ non-native speakers.
 
  **Note**: This dataset corresponds to the data-processing of [KALDI's AMI S5 recipe](https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5).
  This means text is normalized and the audio data is chunked according to the scripts above!
  To make the user experience as simple as possible, we provide the already chunked data here so that the following can be done:
 
+
+ ### Example Usage
+
  ```python
  from datasets import load_dataset
  ds = load_dataset("edinburghcstr/ami", "ihm")
 
  ```
  DatasetDict({
  train: Dataset({
+ features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
  num_rows: 108502
  })
  validation: Dataset({
+ features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
  num_rows: 13098
  })
  test: Dataset({
+ features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
  num_rows: 12643
  })
  })
 
  automatically loads the audio into memory:
 
  ```
+ {'meeting_id': 'EN2001a',
  'audio_id': 'AMI_EN2001a_H00_MEE068_0000557_0000594',
  'text': 'OKAY',
+ 'audio': {'path': '/cache/dir/path/downloads/extracted/2d75d5b3e8a91f44692e2973f08b4cac53698f92c2567bd43b41d19c313a5280/EN2001a/train_ami_en2001a_h00_mee068_0000557_0000594.wav',
  'array': array([0. , 0. , 0. , ..., 0.00033569, 0.00030518,
  0.00030518], dtype=float32),
  'sampling_rate': 16000},
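Since the rewritten loading script downloads the archives and iterates over them with `dl_manager.iter_archive` (see the `ami.py` changes below), the dataset should also work in streaming mode. A minimal sketch, assuming the streaming branch of the script behaves as written:

```python
from datasets import load_dataset

# Stream the IHM subset instead of materializing all archives on disk
# (assumption: the streaming branch of the loading script below works as written).
ds = load_dataset("edinburghcstr/ami", "ihm", streaming=True)

# Peek at the first training example without a full download.
sample = next(iter(ds["train"]))
print(sample["meeting_id"], sample["speaker_id"], sample["text"])
print(sample["begin_time"], "->", sample["end_time"])
```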
 
  - [*Hybrid acoustic models for distant and multichannel large vocabulary speech recognition*](https://www.researchgate.net/publication/258075865_Hybrid_acoustic_models_for_distant_and_multichannel_large_vocabulary_speech_recognition)
  - [Multi-Span Acoustic Modelling using Raw Waveform Signals](https://arxiv.org/abs/1906.11047)
 
+ You can run [run.sh](https://huggingface.co/patrickvonplaten/ami-wav2vec2-large-lv60/blob/main/run.sh) to reproduce the result.
+
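As a rough illustration of how such a number can be checked outside of `run.sh`, the sketch below runs greedy CTC decoding on a handful of test segments and scores it with `jiwer`; it assumes the linked checkpoint loads with the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` classes.

```python
import torch
from datasets import load_dataset
from jiwer import wer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Checkpoint name taken from the run.sh link above; any CTC wav2vec 2.0 model
# fine-tuned on AMI could be scored the same way (assumption, not verified here).
model_id = "patrickvonplaten/ami-wav2vec2-large-lv60"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

ds = load_dataset("edinburghcstr/ami", "ihm", split="test")

references, hypotheses = [], []
for sample in ds.select(range(16)):  # tiny subset, just for illustration
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    hypotheses.append(processor.batch_decode(pred_ids)[0])
    references.append(sample["text"])

print("WER:", wer(references, hypotheses))
```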
+ ### Supported Tasks and Leaderboards
+
+ ### Languages
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ### Data Fields
+
+ ### Data Splits
+
+ #### Transcribed Subsets Size
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ #### Who are the source language producers?
+
+ ### Annotations
+
+ #### Annotation process
+
+ #### Who are the annotators?
+
+ ### Personal and Sensitive Information
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ ### Other Known Limitations
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+
+ ### Licensing Information
+
+
+ ### Citation Information
+
+
+ ### Contributions
+
+ Thanks to [@sanchit-gandhi](https://github.com/sanchit-gandhi), [@patrickvonplaten](https://github.com/patrickvonplaten),
+ and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
+
+ ## Terms of Usage
+
+
ami.py CHANGED
@@ -1,4 +1,4 @@
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
@@ -12,71 +12,57 @@
  # See the License for the specific language governing permissions and
  # limitations under the License.
  """
- GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
- labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
- and unsupervised training. Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
- and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
- sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
- for speech recognition training, and to filter out segments with low-quality transcription. For system training,
- GigaSpeech provides five subsets of different sizes, 10h, 250h, 1000h, 2500h, and 10000h.
- For our 10,000-hour XL training subset, we cap the word error rate at 4% during the filtering/validation stage,
- and for all our other smaller training subsets, we cap it at 0%. The DEV and TEST evaluation sets, on the other hand,
- are re-processed by professional human transcribers to ensure high transcription quality.
  """
 
- import csv
  import os
 
  import datasets
 
  _CITATION = """\
- @article{DBLP:journals/corr/abs-2106-06909,
- author = {Guoguo Chen and
- Shuzhou Chai and
- Guanbo Wang and
- Jiayu Du and
- Wei{-}Qiang Zhang and
- Chao Weng and
- Dan Su and
- Daniel Povey and
- Jan Trmal and
- Junbo Zhang and
- Mingjie Jin and
- Sanjeev Khudanpur and
- Shinji Watanabe and
- Shuaijiang Zhao and
- Wei Zou and
- Xiangang Li and
- Xuchen Yao and
- Yongqing Wang and
- Yujun Wang and
- Zhao You and
- Zhiyong Yan},
- title = {GigaSpeech: An Evolving, Multi-domain {ASR} Corpus with 10, 000 Hours
- of Transcribed Audio},
- journal = {CoRR},
- volume = {abs/2106.06909},
- year = {2021},
- url = {https://arxiv.org/abs/2106.06909},
- eprinttype = {arXiv},
- eprint = {2106.06909},
- timestamp = {Wed, 29 Dec 2021 14:29:26 +0100},
- biburl = {https://dblp.org/rec/journals/corr/abs-2106-06909.bib},
- bibsource = {dblp computer science bibliography, https://dblp.org}
  }
  """
 
  _DESCRIPTION = """\
- GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
- labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
- and unsupervised training. Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
- and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
- sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
- for speech recognition training, and to filter out segments with low-quality transcription. For system training,
- GigaSpeech provides five subsets of different sizes, 10h, 250h, 1000h, 2500h, and 10000h.
- For our 10,000-hour XL training subset, we cap the word error rate at 4% during the filtering/validation stage,
- and for all our other smaller training subsets, we cap it at 0%. The DEV and TEST evaluation sets, on the other hand,
- are re-processed by professional human transcribers to ensure high transcription quality.
  """
 
  _HOMEPAGE = "https://groups.inf.ed.ac.uk/ami/corpus/"
@@ -263,6 +249,12 @@ _EVAL_SAMPLE_IDS = [
  "TS3003d",
  ]
 
  _SUBSETS = ("ihm",)
 
  _BASE_DATA_URL = "https://huggingface.co/datasets/edinburghcstr/ami/resolve/main/"
283
 
284
 
285
  class AMI(datasets.GeneratorBasedBuilder):
286
- """
287
- GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
288
- labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
289
- and unsupervised training (this implementation contains only labelled data for now).
290
- Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
291
- and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
292
- sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
293
- for speech recognition training, and to filter out segments with low-quality transcription. For system training,
294
- GigaSpeech provides five subsets of different sizes, 10h, 250h, 1000h, 2500h, and 10000h.
295
- For our 10,000-hour XL training subset, we cap the word error rate at 4% during the filtering/validation stage,
296
- and for all our other smaller training subsets, we cap it at 0%. The DEV and TEST evaluation sets, on the other hand,
297
- are re-processed by professional human transcribers to ensure high transcription quality.
298
- """
299
 
300
  VERSION = datasets.Version("1.0.0")
301
 
@@ -308,7 +287,7 @@ class AMI(datasets.GeneratorBasedBuilder):
  def _info(self):
  features = datasets.Features(
  {
- "segment_id": datasets.Value("string"),
  "audio_id": datasets.Value("string"),
  "text": datasets.Value("string"),
  "audio": datasets.Audio(sampling_rate=16_000),
@@ -327,46 +306,68 @@ class AMI(datasets.GeneratorBasedBuilder):
  )
 
  def _split_generators(self, dl_manager):
- train_audio_files = {m: _AUDIO_ARCHIVE_URL.format(subset=self.config.name, split="train", _id=m) for m in _TRAIN_SAMPLE_IDS}
- dev_audio_files = {m: _AUDIO_ARCHIVE_URL.format(subset=self.config.name, split="dev", _id=m) for m in _VALIDATION_SAMPLE_IDS}
- eval_audio_files = {m: _AUDIO_ARCHIVE_URL.format(subset=self.config.name, split="eval", _id=m) for m in _EVAL_SAMPLE_IDS}
 
- train_audio_archives = dl_manager.download_and_extract(train_audio_files)
- dev_audio_archives = dl_manager.download_and_extract(dev_audio_files)
- eval_audio_archives = dl_manager.download_and_extract(eval_audio_files)
 
- train_annotation = dl_manager.download_and_extract(_ANNOTATIONS_ARCHIVE_URL.format(split="train"))
- dev_annotation = dl_manager.download_and_extract(_ANNOTATIONS_ARCHIVE_URL.format(split="dev"))
- eval_annotation = dl_manager.download_and_extract(_ANNOTATIONS_ARCHIVE_URL.format(split="eval"))
 
  return [
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
- gen_kwargs={"audio": train_audio_archives, "annotation": train_annotation, "split": "train"},
  ),
  datasets.SplitGenerator(
  name=datasets.Split.VALIDATION,
- gen_kwargs={"audio": dev_audio_archives, "annotation": dev_annotation, "split": "dev"},
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
- gen_kwargs={"audio": eval_audio_archives, "annotation": eval_annotation, "split": "eval"},
  ),
  ]
 
- def _generate_examples(self, audio, annotation, split):
  # open annotation file
  with open(annotation, "r", encoding="utf-8") as f:
  transcriptions = {}
  for line in f.readlines():
  line_items = line.strip().split()
  _id = line_items[0]
  text = " ".join(line_items[1:])
- _, segment_id, microphone_id, speaker_id, begin_time, end_time = _id.split("_")
 
- transcriptions[_id] = {
  "audio_id": _id,
- "segment_id": segment_id,
  "text": text,
  "begin_time": int(begin_time) / 100,
  "end_time": int(end_time) / 100,
@@ -374,10 +375,16 @@ class AMI(datasets.GeneratorBasedBuilder):
  "speaker_id": speaker_id,
  }
 
- for _audio_id, (transcription_id, result) in enumerate(transcriptions.items()):
- folder_id = result["segment_id"]
- file_name = "_".join([split, transcription_id.lower()]) + ".wav"
- audio_file = os.path.join(audio[folder_id], folder_id, file_name)
- result["audio"] = audio_file
- yield _audio_id, result
 
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
 
  # See the License for the specific language governing permissions and
  # limitations under the License.
  """
+ The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
+ synchronized to a common timeline. These include close-talking and far-field microphones, individual and
+ room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
+ the participants also have unsynchronized pens available to them that record what is written. The meetings
+ were recorded in English using three different rooms with different acoustic properties, and include mostly
+ non-native speakers.
  """
 
  import os
 
  import datasets
 
  _CITATION = """\
+ @inproceedings{10.1007/11677482_3,
+ author = {Carletta, Jean and Ashby, Simone and Bourban, Sebastien and Flynn, Mike and Guillemot, Mael and Hain, Thomas and Kadlec, Jaroslav and Karaiskos, Vasilis and Kraaij, Wessel and Kronenthal, Melissa and Lathoud, Guillaume and Lincoln, Mike and Lisowska, Agnes and McCowan, Iain and Post, Wilfried and Reidsma, Dennis and Wellner, Pierre},
+ title = {The AMI Meeting Corpus: A Pre-Announcement},
+ year = {2005},
+ isbn = {3540325492},
+ publisher = {Springer-Verlag},
+ address = {Berlin, Heidelberg},
+ url = {https://doi.org/10.1007/11677482_3},
+ doi = {10.1007/11677482_3},
+ abstract = {The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting
+ recordings. It is being created in the context of a project that is developing meeting
+ browsing technology and will eventually be released publicly. Some of the meetings
+ it contains are naturally occurring, and some are elicited, particularly using a scenario
+ in which the participants play different roles in a design team, taking a design project
+ from kick-off to completion over the course of a day. The corpus is being recorded
+ using a wide range of devices including close-talking and far-field microphones, individual
+ and room-view video cameras, projection, a whiteboard, and individual pens, all of
+ which produce output signals that are synchronized with each other. It is also being
+ hand-annotated for many different phenomena, including orthographic transcription,
+ discourse properties such as named entities and dialogue acts, summaries, emotions,
+ and some head and hand gestures. We describe the data set, including the rationale
+ behind using elicited material, and explain how the material is being recorded, transcribed
+ and annotated.},
+ booktitle = {Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction},
+ pages = {28–39},
+ numpages = {12},
+ location = {Edinburgh, UK},
+ series = {MLMI'05}
  }
  """
 
  _DESCRIPTION = """\
+ The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
+ synchronized to a common timeline. These include close-talking and far-field microphones, individual and
+ room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
+ the participants also have unsynchronized pens available to them that record what is written. The meetings
+ were recorded in English using three different rooms with different acoustic properties, and include mostly
+ non-native speakers. \n
  """
 
  _HOMEPAGE = "https://groups.inf.ed.ac.uk/ami/corpus/"
 
  "TS3003d",
  ]
 
+ _SAMPLE_IDS = {
+ "train": _TRAIN_SAMPLE_IDS,
+ "dev": _VALIDATION_SAMPLE_IDS,
+ "eval": _EVAL_SAMPLE_IDS,
+ }
+
  _SUBSETS = ("ihm",)
 
  _BASE_DATA_URL = "https://huggingface.co/datasets/edinburghcstr/ami/resolve/main/"
 
 
 
  class AMI(datasets.GeneratorBasedBuilder):
 
  VERSION = datasets.Version("1.0.0")
 
 
  def _info(self):
  features = datasets.Features(
  {
+ "meeting_id": datasets.Value("string"),
  "audio_id": datasets.Value("string"),
  "text": datasets.Value("string"),
  "audio": datasets.Audio(sampling_rate=16_000),
 
  )
 
  def _split_generators(self, dl_manager):
+ splits = ["train", "dev", "eval"]
+
+ audio_archives_urls = {}
+ for split in splits:
+ audio_archives_urls[split] = [
+ _AUDIO_ARCHIVE_URL.format(subset=self.config.name, split=split, _id=m) for m in _SAMPLE_IDS[split]
+ ]
 
+ audio_archives = dl_manager.download(audio_archives_urls)
+ local_extracted_archives_paths = dl_manager.extract(audio_archives) if not dl_manager.is_streaming else {
+ split: [None] * len(audio_archives[split]) for split in splits
+ }
 
+ annotations_urls = {split: _ANNOTATIONS_ARCHIVE_URL.format(split=split) for split in splits}
+ annotations = dl_manager.download(annotations_urls)
 
  return [
  datasets.SplitGenerator(
  name=datasets.Split.TRAIN,
+ gen_kwargs={
+ "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_archives["train"]],
+ "local_extracted_archives_paths": local_extracted_archives_paths["train"],
+ "annotation": annotations["train"],
+ "split": "train"
+ },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.VALIDATION,
+ gen_kwargs={
+ "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_archives["dev"]],
+ "local_extracted_archives_paths": local_extracted_archives_paths["dev"],
+ "annotation": annotations["dev"],
+ "split": "dev"
+ },
  ),
  datasets.SplitGenerator(
  name=datasets.Split.TEST,
+ gen_kwargs={
+ "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_archives["eval"]],
+ "local_extracted_archives_paths": local_extracted_archives_paths["eval"],
+ "annotation": annotations["eval"],
+ "split": "eval"
+ },
  ),
  ]
 
+ def _generate_examples(self, audio_archives, local_extracted_archives_paths, annotation, split):
  # open annotation file
+ assert len(audio_archives) == len(local_extracted_archives_paths)
+
  with open(annotation, "r", encoding="utf-8") as f:
  transcriptions = {}
  for line in f.readlines():
  line_items = line.strip().split()
  _id = line_items[0]
  text = " ".join(line_items[1:])
+ _, meeting_id, microphone_id, speaker_id, begin_time, end_time = _id.split("_")
+ audio_filename = "_".join([split, _id.lower()]) + ".wav"
 
+ transcriptions[audio_filename] = {
  "audio_id": _id,
+ "meeting_id": meeting_id,
  "text": text,
  "begin_time": int(begin_time) / 100,
  "end_time": int(end_time) / 100,
 
  "speaker_id": speaker_id,
  }
 
+ features = ["meeting_id", "audio_id", "text", "begin_time", "end_time", "microphone_id", "speaker_id"]
+ for archive, local_archive_path in zip(audio_archives, local_extracted_archives_paths):
+ for audio_path, audio_file in archive:
+ # audio_path is like 'EN2001a/train_ami_en2001a_h00_mee068_0414915_0415078.wav'
+ audio_meta = transcriptions[audio_path.split("/")[-1]]
 
+ yield audio_path, {
+ "audio": {
+ "path": os.path.join(local_archive_path, audio_path) if local_archive_path else audio_path,
+ "bytes": audio_file.read(),
+ },
+ **{feature: audio_meta[feature] for feature in features}
+ }
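For readers following the new `_generate_examples`, each annotation id packs the meeting, microphone, speaker and segment boundaries into one string, and the `/ 100` suggests the boundary values are hundredths of a second. A small standalone sketch that mirrors the parsing above, using the example id from the README:

```python
# Mirrors the id handling in _generate_examples (illustration only).
audio_id = "AMI_EN2001a_H00_MEE068_0000557_0000594"

_, meeting_id, microphone_id, speaker_id, begin_time, end_time = audio_id.split("_")

print(meeting_id)                  # EN2001a
print(microphone_id, speaker_id)   # H00 MEE068
print(int(begin_time) / 100)       # 5.57 (seconds)
print(int(end_time) / 100)         # 5.94 (seconds)

# Filename looked up inside the downloaded "train" archive:
print("_".join(["train", audio_id.lower()]) + ".wav")
# -> train_ami_en2001a_h00_mee068_0000557_0000594.wav
```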