patrickvonplaten committed
Commit 466f219
Parent: a2f2049
This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. ami-ihm-kaldi-chunked.py +403 -0
  2. audio/{dev β†’ ihm/dev}/ES2011a.tar.gz +0 -0
  3. audio/{dev β†’ ihm/dev}/ES2011b.tar.gz +0 -0
  4. audio/{dev β†’ ihm/dev}/ES2011c.tar.gz +0 -0
  5. audio/{dev β†’ ihm/dev}/ES2011d.tar.gz +0 -0
  6. audio/{dev β†’ ihm/dev}/IB4001.tar.gz +0 -0
  7. audio/{dev β†’ ihm/dev}/IB4002.tar.gz +0 -0
  8. audio/{dev β†’ ihm/dev}/IB4003.tar.gz +0 -0
  9. audio/{dev β†’ ihm/dev}/IB4004.tar.gz +0 -0
  10. audio/{dev β†’ ihm/dev}/IB4010.tar.gz +0 -0
  11. audio/{dev β†’ ihm/dev}/IB4011.tar.gz +0 -0
  12. audio/{dev β†’ ihm/dev}/IS1008a.tar.gz +0 -0
  13. audio/{dev β†’ ihm/dev}/IS1008b.tar.gz +0 -0
  14. audio/{dev β†’ ihm/dev}/IS1008c.tar.gz +0 -0
  15. audio/{dev β†’ ihm/dev}/IS1008d.tar.gz +0 -0
  16. audio/{dev β†’ ihm/dev}/TS3004a.tar.gz +0 -0
  17. audio/{dev β†’ ihm/dev}/TS3004b.tar.gz +0 -0
  18. audio/{dev β†’ ihm/dev}/TS3004c.tar.gz +0 -0
  19. audio/{dev β†’ ihm/dev}/TS3004d.tar.gz +0 -0
  20. audio/{eval β†’ ihm/eval}/EN2002a.tar.gz +0 -0
  21. audio/{eval β†’ ihm/eval}/EN2002b.tar.gz +0 -0
  22. audio/{eval β†’ ihm/eval}/EN2002c.tar.gz +0 -0
  23. audio/{eval β†’ ihm/eval}/EN2002d.tar.gz +0 -0
  24. audio/{eval β†’ ihm/eval}/ES2004a.tar.gz +0 -0
  25. audio/{eval β†’ ihm/eval}/ES2004b.tar.gz +0 -0
  26. audio/{eval β†’ ihm/eval}/ES2004c.tar.gz +0 -0
  27. audio/{eval β†’ ihm/eval}/ES2004d.tar.gz +0 -0
  28. audio/{eval β†’ ihm/eval}/IS1009a.tar.gz +0 -0
  29. audio/{eval β†’ ihm/eval}/IS1009b.tar.gz +0 -0
  30. audio/{eval β†’ ihm/eval}/IS1009c.tar.gz +0 -0
  31. audio/{eval β†’ ihm/eval}/IS1009d.tar.gz +0 -0
  32. audio/{eval β†’ ihm/eval}/TS3003a.tar.gz +0 -0
  33. audio/{eval β†’ ihm/eval}/TS3003b.tar.gz +0 -0
  34. audio/{eval β†’ ihm/eval}/TS3003c.tar.gz +0 -0
  35. audio/{eval β†’ ihm/eval}/TS3003d.tar.gz +0 -0
  36. audio/{train β†’ ihm/train}/EN2001a.tar.gz +0 -0
  37. audio/{train β†’ ihm/train}/EN2001b.tar.gz +0 -0
  38. audio/{train β†’ ihm/train}/EN2001d.tar.gz +0 -0
  39. audio/{train β†’ ihm/train}/EN2001e.tar.gz +0 -0
  40. audio/{train β†’ ihm/train}/EN2003a.tar.gz +0 -0
  41. audio/{train β†’ ihm/train}/EN2004a.tar.gz +0 -0
  42. audio/{train β†’ ihm/train}/EN2005a.tar.gz +0 -0
  43. audio/{train β†’ ihm/train}/EN2006a.tar.gz +0 -0
  44. audio/{train β†’ ihm/train}/EN2006b.tar.gz +0 -0
  45. audio/{train β†’ ihm/train}/EN2009b.tar.gz +0 -0
  46. audio/{train β†’ ihm/train}/EN2009c.tar.gz +0 -0
  47. audio/{train β†’ ihm/train}/EN2009d.tar.gz +0 -0
  48. audio/{train β†’ ihm/train}/ES2002a.tar.gz +0 -0
  49. audio/{train β†’ ihm/train}/ES2002b.tar.gz +0 -0
  50. audio/{train β†’ ihm/train}/ES2002c.tar.gz +0 -0
ami-ihm-kaldi-chunked.py ADDED
@@ -0,0 +1,403 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """
+ The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
+ synchronized to a common timeline. These include close-talking and far-field microphones, individual and
+ room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
+ the participants also have unsynchronized pens available to them that record what is written. The meetings
+ were recorded in English using three different rooms with different acoustic properties, and include mostly
+ non-native speakers.
+
+ This script loads the close-talking, individual headset microphone (IHM) recordings, chunked per meeting
+ into tar.gz archives with Kaldi-style "text" annotation files.
+ """
+
+ import os
+
+ import datasets
+
+ _CITATION = """\
+ @inproceedings{mccowan2005ami,
+     author = {McCowan, Iain and Carletta, Jean and Kraaij, Wessel and Ashby, Simone and Bourban, S. and
+               Flynn, M. and Guillemot, M. and Hain, Thomas and Kadlec, J. and Karaiskos, Vasilis and
+               Kronenthal, M. and Lathoud, Guillaume and Lincoln, Mike and Lisowska, Agnes and
+               Post, Wilfried and Reidsma, Dennis and Wellner, P.},
+     title = {The {AMI} Meeting Corpus},
+     booktitle = {Proceedings of Measuring Behavior 2005, the 5th International Conference on
+                  Methods and Techniques in Behavioral Research},
+     year = {2005},
+ }
+ """
+
+ _DESCRIPTION = """\
+ The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
+ synchronized to a common timeline. These include close-talking and far-field microphones, individual and
+ room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
+ the participants also have unsynchronized pens available to them that record what is written. The meetings
+ were recorded in English using three different rooms with different acoustic properties, and include mostly
+ non-native speakers.
+ """
+
+ _HOMEPAGE = "https://groups.inf.ed.ac.uk/ami/corpus/"
+
+ _LICENSE = "CC BY 4.0"
+
+ _TRAIN_SAMPLE_IDS = [
+     "EN2001a",
+     "EN2001b",
+     "EN2001d",
+     "EN2001e",
+     "EN2003a",
+     "EN2004a",
+     "EN2005a",
+     "EN2006a",
+     "EN2006b",
+     "EN2009b",
+     "EN2009c",
+     "EN2009d",
+     "ES2002a",
+     "ES2002b",
+     "ES2002c",
+     "ES2002d",
+     "ES2003a",
+     "ES2003b",
+     "ES2003c",
+     "ES2003d",
+     "ES2005a",
+     "ES2005b",
+     "ES2005c",
+     "ES2005d",
+     "ES2006a",
+     "ES2006b",
+     "ES2006c",
+     "ES2006d",
+     "ES2007a",
+     "ES2007b",
+     "ES2007c",
+     "ES2007d",
+     "ES2008a",
+     "ES2008b",
+     "ES2008c",
+     "ES2008d",
+     "ES2009a",
+     "ES2009b",
+     "ES2009c",
+     "ES2009d",
+     "ES2010a",
+     "ES2010b",
+     "ES2010c",
+     "ES2010d",
+     "ES2012a",
+     "ES2012b",
+     "ES2012c",
+     "ES2012d",
+     "ES2013a",
+     "ES2013b",
+     "ES2013c",
+     "ES2013d",
+     "ES2014a",
+     "ES2014b",
+     "ES2014c",
+     "ES2014d",
+     "ES2015a",
+     "ES2015b",
+     "ES2015c",
+     "ES2015d",
+     "ES2016a",
+     "ES2016b",
+     "ES2016c",
+     "ES2016d",
+     "IB4005",
+     "IN1001",
+     "IN1002",
+     "IN1005",
+     "IN1007",
+     "IN1008",
+     "IN1009",
+     "IN1012",
+     "IN1013",
+     "IN1014",
+     "IN1016",
+     "IS1000a",
+     "IS1000b",
+     "IS1000c",
+     "IS1000d",
+     "IS1001a",
+     "IS1001b",
+     "IS1001c",
+     "IS1001d",
+     "IS1002b",
+     "IS1002c",
+     "IS1002d",
+     "IS1003a",
+     "IS1003b",
+     "IS1003c",
+     "IS1003d",
+     "IS1004a",
+     "IS1004b",
+     "IS1004c",
+     "IS1004d",
+     "IS1005a",
+     "IS1005b",
+     "IS1005c",
+     "IS1006a",
+     "IS1006b",
+     "IS1006c",
+     "IS1006d",
+     "IS1007a",
+     "IS1007b",
+     "IS1007c",
+     "IS1007d",
+     "TS3005a",
+     "TS3005b",
+     "TS3005c",
+     "TS3005d",
+     "TS3006a",
+     "TS3006b",
+     "TS3006c",
+     "TS3006d",
+     "TS3007a",
+     "TS3007b",
+     "TS3007c",
+     "TS3007d",
+     "TS3008a",
+     "TS3008b",
+     "TS3008c",
+     "TS3008d",
+     "TS3009a",
+     "TS3009b",
+     "TS3009c",
+     "TS3009d",
+     "TS3010a",
+     "TS3010b",
+     "TS3010c",
+     "TS3010d",
+     "TS3011a",
+     "TS3011b",
+     "TS3011c",
+     "TS3011d",
+     "TS3012a",
+     "TS3012b",
+     "TS3012c",
+     "TS3012d",
+ ]
+
+ _VALIDATION_SAMPLE_IDS = [
+     "ES2011a",
+     "ES2011c",
+     "IB4001",
+     "IB4003",
+     "IB4010",
+     "IS1008a",
+     "IS1008c",
+     "TS3004a",
+     "TS3004c",
+     "ES2011b",
+     "ES2011d",
+     "IB4002",
+     "IB4004",
+     "IB4011",
+     "IS1008b",
+     "IS1008d",
+     "TS3004b",
+     "TS3004d",
+ ]
+
+ _EVAL_SAMPLE_IDS = [
+     "EN2002a",
+     "EN2002b",
+     "EN2002c",
+     "EN2002d",
+     "ES2004a",
+     "ES2004b",
+     "ES2004c",
+     "ES2004d",
+     "IS1009a",
+     "IS1009b",
+     "IS1009c",
+     "IS1009d",
+     "TS3003a",
+     "TS3003b",
+     "TS3003c",
+     "TS3003d",
+ ]
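+ # In total: 137 train meetings, 18 dev meetings and 16 eval meetings.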
+
+ _SUBSETS = ("ihm",)
+
+ _BASE_DATA_URL = "https://huggingface.co/datasets/patrickvonplaten/ami-ihm-kaldi-chunked/resolve/main/"
+
+ _AUDIO_ARCHIVE_URL = _BASE_DATA_URL + "audio/{subset}/{split}/{_id}.tar.gz"
+
+ _ANNOTATIONS_ARCHIVE_URL = _BASE_DATA_URL + "annotations/{split}/text"
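+ # For example, the dev-split archive for meeting ES2011a in the "ihm" subset resolves to
+ # _BASE_DATA_URL + "audio/ihm/dev/ES2011a.tar.gz", matching the audio/ihm/{train,dev,eval}
+ # layout introduced by this commit.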
+
+ logger = datasets.utils.logging.get_logger(__name__)
+
+
+ class AMIConfig(datasets.BuilderConfig):
+     """BuilderConfig for AMI."""
+
+     def __init__(self, name, *args, **kwargs):
+         """BuilderConfig for AMI"""
+         super().__init__(name=name, *args, **kwargs)
+         if name not in {"dev", "test"}:
+             self.subsets_to_download = _SUBSETS[: _SUBSETS.index(name) + 1]
+         else:
+             self.subsets_to_download = (name,)
+
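+ # Note: configs are only built for _SUBSETS, so `name` is always "ihm" here and
+ # subsets_to_download always resolves to ("ihm",).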
+
+ class AMI(datasets.GeneratorBasedBuilder):
+     """
+     The AMI Meeting Corpus consists of 100 hours of meeting recordings, recorded in English in three rooms
+     with different acoustic properties and featuring mostly non-native speakers. This builder covers the
+     close-talking, individual headset microphone (IHM) recordings, chunked into per-meeting archives.
+     """
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [AMIConfig(name=subset) for subset in _SUBSETS]
+
+     DEFAULT_WRITER_BATCH_SIZE = 128
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "segment_id": datasets.Value("string"),
+                 "audio_id": datasets.Value("string"),
+                 "text": datasets.Value("string"),
+                 "audio": datasets.Audio(sampling_rate=16_000),
+                 "begin_time": datasets.Value("float32"),
+                 "end_time": datasets.Value("float32"),
+                 "microphone_id": datasets.Value("string"),
+                 "speaker_id": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
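+     # A generated example then has the shape (illustrative values, assuming the
+     # segment-id convention parsed in _generate_examples below):
+     #   {"segment_id": "AMI_ES2011a_H00_FEE041_0003714_0003915", "audio_id": "ES2011a",
+     #    "microphone_id": "H00", "speaker_id": "FEE041",
+     #    "begin_time": 37.14, "end_time": 39.15, "text": "...", "audio": {...}}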
+
+     def _split_generators(self, dl_manager):
+         # "subset" is the config name ("ihm"); `self.config.name`, not `self.name`
+         # (the dataset name), selects the audio/<subset>/<split> folders.
+         train_audio_files = [_AUDIO_ARCHIVE_URL.format(subset=self.config.name, split="train", _id=m) for m in _TRAIN_SAMPLE_IDS]
+         dev_audio_files = [_AUDIO_ARCHIVE_URL.format(subset=self.config.name, split="dev", _id=m) for m in _VALIDATION_SAMPLE_IDS]
+         eval_audio_files = [_AUDIO_ARCHIVE_URL.format(subset=self.config.name, split="eval", _id=m) for m in _EVAL_SAMPLE_IDS]
+
+         train_audio_archives = dl_manager.download_and_extract(train_audio_files)
+         dev_audio_archives = dl_manager.download_and_extract(dev_audio_files)
+         eval_audio_archives = dl_manager.download_and_extract(eval_audio_files)
+
+         train_annotation = dl_manager.download_and_extract(_ANNOTATIONS_ARCHIVE_URL.format(split="train"))
+         dev_annotation = dl_manager.download_and_extract(_ANNOTATIONS_ARCHIVE_URL.format(split="dev"))
+         eval_annotation = dl_manager.download_and_extract(_ANNOTATIONS_ARCHIVE_URL.format(split="eval"))
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"audio": train_audio_archives, "annotation": train_annotation},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"audio": dev_audio_archives, "annotation": dev_annotation},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"audio": eval_audio_archives, "annotation": eval_annotation},
+             ),
+         ]
+
+     def _generate_examples(self, audio, annotation):
+         # The annotation file is a Kaldi-style "text" file with one segment per line:
+         # "<segment_id> <transcript>". The segment id is assumed to follow the
+         # AMI Kaldi recipe convention, e.g. "AMI_ES2011a_H00_FEE041_0003714_0003915"
+         # (<corpus>_<meeting>_<headset>_<speaker>_<begin>_<end>, times in 10 ms units).
+         transcriptions = {}
+         with open(annotation, encoding="utf-8") as annotation_file:
+             for line in annotation_file:
+                 segment_id, *words = line.strip().split()
+                 _, audio_id, microphone_id, speaker_id, begin_time, end_time = segment_id.split("_")
+                 transcriptions[segment_id] = {
+                     "segment_id": segment_id,
+                     "audio_id": audio_id,
+                     "text": " ".join(words),
+                     "begin_time": int(begin_time) / 100,
+                     "end_time": int(end_time) / 100,
+                     "microphone_id": microphone_id,
+                     "speaker_id": speaker_id,
+                 }
+
+         # Each extracted archive is assumed to hold one chunked wav per segment,
+         # named "<segment_id>.wav".
+         for archive_path in audio:
+             for root, _, filenames in os.walk(archive_path):
+                 for filename in sorted(filenames):
+                     if not filename.endswith(".wav"):
+                         continue
+                     segment_id = filename[: -len(".wav")]
+                     if segment_id not in transcriptions:
+                         logger.warning(f"No transcription found for {segment_id}, skipping.")
+                         continue
+                     yield segment_id, {**transcriptions[segment_id], "audio": os.path.join(root, filename)}
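
Once this script is merged, loading should look roughly as follows (a minimal sketch: the repo id comes from _BASE_DATA_URL and "ihm" is the only config defined in _SUBSETS):

    from datasets import load_dataset

    # downloads the per-meeting tar.gz archives and the Kaldi "text" annotations
    ami = load_dataset("patrickvonplaten/ami-ihm-kaldi-chunked", "ihm")
    print(ami["train"][0]["text"])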
audio/{dev β†’ ihm/dev}/ES2011a.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/ES2011b.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/ES2011c.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/ES2011d.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IB4001.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IB4002.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IB4003.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IB4004.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IB4010.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IB4011.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IS1008a.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IS1008b.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IS1008c.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/IS1008d.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/TS3004a.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/TS3004b.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/TS3004c.tar.gz RENAMED
File without changes
audio/{dev β†’ ihm/dev}/TS3004d.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/EN2002a.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/EN2002b.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/EN2002c.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/EN2002d.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/ES2004a.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/ES2004b.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/ES2004c.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/ES2004d.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/IS1009a.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/IS1009b.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/IS1009c.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/IS1009d.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/TS3003a.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/TS3003b.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/TS3003c.tar.gz RENAMED
File without changes
audio/{eval β†’ ihm/eval}/TS3003d.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2001a.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2001b.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2001d.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2001e.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2003a.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2004a.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2005a.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2006a.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2006b.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2009b.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2009c.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/EN2009d.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/ES2002a.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/ES2002b.tar.gz RENAMED
File without changes
audio/{train β†’ ihm/train}/ES2002c.tar.gz RENAMED
File without changes