parquet-converter committed
Commit 650e904 · 1 Parent(s): 2c60fe7

Update parquet files

.gitattributes DELETED
@@ -1,54 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
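
These deleted rules are ordinary `.gitattributes` glob patterns: every matching file is stored through Git LFS instead of in the git object database. As a rough illustration (not part of the commit), here is a sketch using Python's `fnmatch` to check which paths such patterns would capture; note that `fnmatch` only approximates git's attribute-matching semantics, and the file names below are made up:

```python
from fnmatch import fnmatch

# A handful of the deleted LFS rules, kept as glob patterns.
LFS_PATTERNS = ["*.parquet", "*.wav", "*.tar.*", "*tfevents*"]

def routed_through_lfs(path: str) -> bool:
    # fnmatch's "*" also crosses "/", so this is only an approximation
    # of how git itself evaluates .gitattributes patterns.
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

# Hypothetical file names, for illustration only.
for name in ["data/test.parquet", "audio/sample.wav", "README.md"]:
    print(f"{name}: {routed_through_lfs(name)}")
```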
data/test-00000-of-00001-112f39d2f116a22b.parquet → Jzuluaga--atco2_corpus_1h/parquet-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:08087dad0de847015ae75430c59f17027b061539ce11895627c15f635ded2ad1
- size 113467762
+ oid sha256:376ef74499d3d6eeb511e12e51b9a4ccca7fe54bc40df1c1788b9058afd88a66
+ size 113473529
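
Both sides of this hunk are Git LFS pointer files: three `key value` lines (`version`, `oid`, `size`) that stand in for the actual parquet payload. A minimal sketch of parsing such a pointer and verifying a downloaded payload against it; the local file path is hypothetical:

```python
import hashlib
import os

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer ('key value' per line) into a dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

def matches_pointer(pointer: dict, payload_path: str) -> bool:
    """Check a local file against the pointer's sha256 oid and byte size."""
    algo, expected = pointer["oid"].split(":", 1)
    sha = hashlib.sha256()
    with open(payload_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return (
        algo == "sha256"
        and sha.hexdigest() == expected
        and os.path.getsize(payload_path) == int(pointer["size"])
    )

pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:376ef74499d3d6eeb511e12e51b9a4ccca7fe54bc40df1c1788b9058afd88a66\n"
    "size 113473529"
)
# "parquet-test.parquet" is a hypothetical local download path.
print(matches_pointer(pointer, "parquet-test.parquet"))
```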
README.md DELETED
@@ -1,117 +0,0 @@
- ---
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: audio
-     dtype:
-       audio:
-         sampling_rate: 16000
-   - name: text
-     dtype: string
-   - name: segment_start_time
-     dtype: float32
-   - name: segment_end_time
-     dtype: float32
-   - name: duration
-     dtype: float32
-   splits:
-   - name: test
-     num_bytes: 113872168.0
-     num_examples: 871
-   download_size: 113467762
-   dataset_size: 113872168.0
- tags:
- - audio
- - automatic-speech-recognition
- - en-atc
- - en
- - noisy-speech-recognition
- - speech-recognition
- task_categories:
- - automatic-speech-recognition
- language:
- - en
- multilinguality:
- - monolingual
- ---
-
- # Dataset Card for ATCO2 test set corpus (1hr set)
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages and Other Details](#languages-and-other-details)
- - [Dataset Structure](#dataset-structure)
-   - [Data Fields](#data-fields)
- - [Additional Information](#additional-information)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-
- ## Dataset Description
- - **Homepage:** [ATCO2 project homepage](https://www.atco2.org/)
- - **Repository:** [ATCO2 corpus](https://github.com/idiap/atco2-corpus)
- - **Paper:** [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054)
-
- ### Dataset Summary
-
- The ATCO2 project aims to develop a unique platform for collecting, organizing and pre-processing air-traffic control (voice communication) data from the air space. The project received funding from the Clean Sky 2 Joint Undertaking (JU) under grant agreement No 864702. The JU receives support from the European Union's Horizon 2020 research and innovation programme and the Clean Sky 2 JU members other than the Union.
-
- The project collected real-time voice communication between air-traffic controllers and pilots, available either directly through publicly accessible radio frequency channels or indirectly from air-navigation service providers (ANSPs). In addition to the voice communication data, contextual information is available in the form of metadata (i.e. surveillance data). The dataset consists of two distinct packages:
-
- - A corpus of 5000+ hours of pseudo-transcribed air-traffic control speech, collected across different airports (Sion, Bern, Zurich, etc.) in .wav format for speech recognition. The speaker distribution is roughly 90/10% male/female, and the group contains native and non-native speakers of English.
- - A corpus of 4 hours of transcribed air-traffic control speech, collected across the same airports in .wav format and with the same speaker distribution. This corpus has been transcribed with orthographic information in XML format, including speaker and noise information, SNR values and more.
- - A free sample of the 4 hours of transcribed data is available on the [ATCO2 project homepage](https://www.atco2.org/data).
-
- ### Supported Tasks and Leaderboards
-
- - `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [Wav2Vec 2.0 LARGE model](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc-and-atcosim).
-
- ### Languages and Other Details
-
- The text and the recordings are in English. For more information see Table 3 and Table 4 of the [ATCO2 corpus paper](https://arxiv.org/abs/2211.04054).
-
- ## Dataset Structure
-
- ### Data Fields
-
- - `id (string)`: a recording identifier for each example.
- - `audio (audio)`: audio data for the given ID.
- - `text (string)`: transcript of the file, already normalized. See these repositories for more details: [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc).
- - `segment_start_time (float32)`: segment start time (normally 0).
- - `segment_end_time (float32)`: segment end time.
- - `duration (float32)`: duration of the recording, computed as segment_end_time - segment_start_time.
-
- ## Additional Information
-
- ### Licensing Information
-
- The licensing status of the ATCO2-test-set-1h corpus is described in the file **ATCO2-ASRdataset-v1_beta - End-User Data Agreement** in the data folder. Download the data from the [ATCO2 project homepage](https://www.atco2.org/data).
-
- ### Citation Information
-
- Contributors who prepared, processed, normalized and uploaded the dataset to HuggingFace:
-
- ```
- @article{zuluaga2022how,
-   title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain-Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
-   author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
-   journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
-   year={2022}
- }
- @article{zuluaga2022bertraffic,
-   title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
-   author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
-   journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
-   year={2022}
- }
- @article{zuluaga2022atco2,
-   title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
-   author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
-   journal={arXiv preprint arXiv:2211.04054},
-   year={2022}
- }
- ```
-
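
With the conversion done, the schema documented in this deleted card can still be exercised directly from the auto-generated parquet. A minimal sketch, assuming the Hub repository id is `Jzuluaga/atco2_corpus_1h` (inferred from the renamed parquet path above):

```python
from datasets import load_dataset

# Repository id inferred from the parquet path above; adjust if it differs.
ds = load_dataset("Jzuluaga/atco2_corpus_1h", split="test")

# Inspect the fields documented in the (deleted) dataset card.
sample = ds[0]
print(sample["id"], sample["text"])
print(sample["segment_start_time"], sample["segment_end_time"], sample["duration"])
print(sample["audio"]["sampling_rate"])  # 16000, per the card
```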
atc_data_loader.py DELETED
@@ -1,282 +0,0 @@
- #!/usr/bin/env python3
- # -*- coding: utf-8 -*-
- #
- # SPDX-FileCopyrightText: Copyright © <2022> Idiap Research Institute <contact@idiap.ch>
- #
- # SPDX-FileContributor: Juan Zuluaga-Gomez <jzuluaga@idiap.ch>
- #
- # SPDX-License-Identifier: MIT-License
-
- """\
- Script for loading air traffic control (ATC) speech datasets for automatic speech recognition (ASR).
- This script has been designed for ATC datasets that are in Kaldi format.
-
- Required files: text, wav.scp and segments
-
- - Databases
-   - Training:
-     - ATCOSIM, LDC-ATCC, or UWB-ATCC corpora.
-   - Testing:
-     - ATCO2-test-set-1h or 4h, LDC-ATCC, or UWB-ATCC corpora.
- """
-
- import os
- import re
-
- import datasets
- import numpy as np
- import soundfile as sf
- from datasets.tasks import AutomaticSpeechRecognition
-
- _CITATION = """\
- @article{zuluaga2022atco2,
-   title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
-   author={Zuluaga-Gomez, Juan and Vesel{\'y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
-   journal={arXiv preprint arXiv:2211.04054},
-   year={2022}
- }
- @article{zuluaga2022does,
-   title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain-Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
-   author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and others},
-   journal={2022 IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
-   year={2022}
- }
- @article{zuluagabertraffic,
-   title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications (submitted to @ SLT-2022)},
-   author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
-   journal={2022 IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
-   year={2022}
- }
- """
-
- _DESCRIPTION = """\
- ATC speech DATASET. This DataLoader works with data in Kaldi format.
- - We use the following files: text, segments and wav.scp
-   - text --> utt_id transcript
-   - segments --> utt_id recording_id t_begin t_end
-   - wav.scp --> recording_id /path/to/wav/
- The default dataset is from the ATCO2 project, a 1-hour sample: https://www.replaywell.com/atco2/download/ATCO2-ASRdataset-v1_beta.tgz
- """
-
- _DATA_URL = "http://catalog.elra.info/en-us/repository/browse/ELRA-S0484/"
-
- _HOMEPAGE = "https://github.com/idiap/w2v2-air-traffic"
-
- logger = datasets.logging.get_logger(__name__)
-
- # Our models work with audio data at 16 kHz.
- _SAMPLING_RATE = 16000
-
-
- class ATCDataASRConfig(datasets.BuilderConfig):
-     """BuilderConfig for air traffic control datasets."""
-
-     def __init__(self, **kwargs):
-         """
-         Args:
-             data_dir: `string`, the path to the folder containing the required files: text, wav.scp and segments
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super().__init__(**kwargs)
-
-
- class ATCDataASR(datasets.GeneratorBasedBuilder):
-
-     DEFAULT_WRITER_BATCH_SIZE = 256
-     DEFAULT_CONFIG_NAME = "all"
-     BUILDER_CONFIGS = [
-         # TRAIN, DEV AND TEST DATASETS
-         ATCDataASRConfig(name="train", description="ATC train dataset."),
-         ATCDataASRConfig(name="dev", description="ATC dev dataset."),
-         ATCDataASRConfig(name="test", description="ATC test dataset."),
-         # UNSUPERVISED DATASETS
-         ATCDataASRConfig(name="unsupervised", description="ATC unsupervised dataset."),
-     ]
-
-     # provide some information about the dataset we just gathered
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "file": datasets.Value("string"),
-                     "audio": datasets.features.Audio(sampling_rate=_SAMPLING_RATE),
-                     "text": datasets.Value("string"),
-                     "segment_start_time": datasets.Value("float"),
-                     "segment_end_time": datasets.Value("float"),
-                     "duration": datasets.Value("float"),
-                 }
-             ),
-             supervised_keys=("audio", "text"),
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-             task_templates=[
-                 AutomaticSpeechRecognition(
-                     audio_column="audio", transcription_column="text"
-                 )
-             ],
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-
-         split = self.config.name
-
-         # UNSUPERVISED set (used only for decoding)
-         if "unsupervised" in split:
-             split_name = datasets.Split.TEST
-         elif "test" in split or "dev" in split or "dummy" in split:
-             split_name = datasets.Split.TEST
-         # the only option left is the train set
-         else:
-             split_name = datasets.Split.TRAIN
-
-         # you need to pass a data directory where the Kaldi folder is stored
-         filepath = self.config.data_dir
-
-         return [
-             datasets.SplitGenerator(
-                 name=split_name,
-                 # these kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": filepath,
-                     "split": split,
-                 },
-             )
-         ]
-
-     def _generate_examples(self, filepath, split):
-         """You need to pass a path with the Kaldi data; the folder should have
-         audio: wav.scp,
-         transcripts: text,
-         timing information: segments
-         """
-
-         logger.info("Generating examples located in: %s", filepath)
-
-         text_file = os.path.join(filepath, "text")
-         wavscp = os.path.join(filepath, "wav.scp")
-         segments = os.path.join(filepath, "segments")
-
-         id_ = ""
-         text_dict, wav_dict = {}, {}
-         segments_dict, utt2wav_id = {}, {}
-
-         # get the text file: one "utt_id transcript" pair per line
-         with open(text_file) as text_f:
-             for line in text_f:
-                 if len(line.split(" ")) > 1:
-                     id_, transcript = line.split(" ", maxsplit=1)
-                     transcript = _remove_special_characters(transcript)
-                     # skip empty or near-empty transcripts
-                     if len(transcript) < 2:
-                         continue
-                     text_dict[id_] = transcript
-                 else:  # line carries only an utterance id
-                     # this is normal for the unsupervised set; skip otherwise
-                     if "test_unsup" not in self.config.name:
-                         continue
-                     id_ = line.rstrip().split(" ")[0]
-                     text_dict[id_] = ""
-
-         # get wav.scp and load the audio data into memory
-         with open(wavscp) as text_f:
-             for line in text_f:
-                 if line:
-                     if len(line.split()) < 2:
-                         continue
-                     id_, wavpath = line.split(" ", maxsplit=1)
-                     # only select the part that ends in wav, flac or sph
-                     wavpath = [
-                         x
-                         for x in wavpath.split(" ")
-                         if ".wav" in x or ".WAV" in x or ".flac" in x or ".sph" in x
-                     ][0].rstrip()
-
-                     # read the full recording; segments are cropped out later
-                     segment, sampling_rate = sf.read(wavpath, dtype=np.int16)
-                     wav_dict[id_] = [wavpath.rstrip(), segment, sampling_rate]
-
-         # get the segments dictionary: utt_id -> (start, end) in seconds
-         with open(segments) as text_f:
-             for line in text_f:
-                 if line:
-                     if len(line.split()) < 4:
-                         continue
-                     id_, wavid_, start, end = line.rstrip().split(" ")
-                     segments_dict[id_] = start.rstrip(), end.rstrip()
-                     utt2wav_id[id_] = wavid_
-
-         for rec_id, text in text_dict.items():
-             if rec_id in utt2wav_id and rec_id in segments_dict:
-
-                 # get the audio data from memory and the path of the file
-                 wavpath, segment, sampling_rate = wav_dict[utt2wav_id[rec_id]]
-                 # get timing information
-                 seg_start, seg_end = segments_dict[rec_id]
-                 seg_start, seg_end = float(seg_start), float(seg_end)
-                 duration = round(seg_end - seg_start, 3)
-
-                 # get the samples, already cropped to the segment
-                 samples = _extract_audio_segment(
-                     segment, sampling_rate, seg_start, seg_end
-                 )
-
-                 # output data for the given dataset
-                 example = {
-                     "audio": {
-                         "path": wavpath,
-                         "array": samples,
-                         "sampling_rate": sampling_rate,
-                     },
-                     "id": rec_id,
-                     "file": wavpath,
-                     "text": text,
-                     "segment_start_time": format(seg_start, ".3f"),
-                     "segment_end_time": format(seg_end, ".3f"),
-                     "duration": format(duration, ".3f"),
-                 }
-
-                 yield rec_id, example
-
-
- def _remove_special_characters(text):
-     """Remove some special chars/symbols from the given transcript."""
-
-     text = text.split(" ")
-     # first remove words between [] and <>
-     text = " ".join(
-         [
-             x
-             for x in text
-             if "[" not in x and "]" not in x and "<" not in x and ">" not in x
-         ]
-     )
-
-     # regex with predefined symbols to ignore/remove (punctuation and digits)
-     chars_to_ignore_regex2 = '[\{\[\]\<\>\/\,\?\.\!\u00AC\;\:"\\%\\\]|[0-9]'
-
-     text = re.sub(chars_to_ignore_regex2, "", text).lower()
-     sentence = text.replace("\u2013", "-")
-     sentence = sentence.replace("\u2014", "-")
-     sentence = sentence.replace("\u2018", "'")
-     sentence = sentence.replace("\u201C", "")
-     sentence = sentence.replace("\u201D", "")
-     sentence = sentence.replace("ñ", "n")
-     sentence = sentence.replace(" - ", " ")
-     sentence = sentence.replace("-", "")
-     sentence = sentence.replace("'", " ")
-
-     return sentence.lower().rstrip()
-
-
- def _extract_audio_segment(segment, sampling_rate, start_sec, end_sec):
-     """Extracts a segment of audio samples (as an ndarray) from the given recording."""
-     # the dataset only contains mono audio
-     start_sample = int(start_sec * sampling_rate)
-     end_sample = min(int(end_sec * sampling_rate), segment.shape[0])
-     samples = segment[start_sample:end_sample]
-     return samples
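
Before the parquet conversion, this script was the dataset's entry point for `datasets`. A minimal usage sketch, assuming a local Kaldi-style directory containing the `text`, `wav.scp` and `segments` files the script reads (the path is hypothetical):

```python
from datasets import load_dataset

# "test" selects the ATCDataASRConfig of the same name; data_dir must
# contain the Kaldi files the script reads: text, wav.scp and segments.
ds = load_dataset(
    "atc_data_loader.py",            # the (now deleted) loading script above
    "test",
    data_dir="/path/to/kaldi/test",  # hypothetical location
)
print(ds["test"][0]["text"])
```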