lhoestq committed
Commit 0b15d04
1 Parent(s): 82cc17d

Redirect TIMIT download from LDC (#4145)

* redirect TIMIT download to LDC

* mention manual download in the dataset card

Commit from https://github.com/huggingface/datasets/commit/1004f364fe6e85290889c32acd2b3463e785c5a3

README.md CHANGED
@@ -59,13 +59,22 @@ paperswithcode_id: timit
 
 The TIMIT corpus of read speech is designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems. TIMIT contains broadband recordings of 630 speakers of eight major dialects of American English, each reading ten phonetically rich sentences. The TIMIT corpus includes time-aligned orthographic, phonetic and word transcriptions as well as a 16-bit, 16kHz speech waveform file for each utterance. Corpus design was a joint effort among the Massachusetts Institute of Technology (MIT), SRI International (SRI) and Texas Instruments, Inc. (TI). The speech was recorded at TI, transcribed at MIT and verified and prepared for CD-ROM production by the National Institute of Standards and Technology (NIST).
 
+The dataset needs to be downloaded manually from https://catalog.ldc.upenn.edu/LDC93S1:
+
+```
+To use TIMIT you have to download it manually.
+Please create an account and download the dataset from https://catalog.ldc.upenn.edu/LDC93S1
+Then extract all files in one folder and load the dataset with:
+`datasets.load_dataset('timit_asr', data_dir='path/to/folder/folder_name')`
+```
+
 ### Supported Tasks and Leaderboards
 
 - `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-timit and ranks models based on their WER.
 
 ### Languages
 
-The audio is in English.
+The audio is in English.
 The TIMIT corpus transcriptions have been hand verified. Test and training subsets, balanced for phonetic and dialectal coverage, are specified. Tabular computer-searchable information is included as well as written documentation.
 
 ## Dataset Structure
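
In practice, the workflow described in the new README block looks like the following (a minimal sketch; `~/timit_extracted` is a placeholder for wherever you unzipped the LDC archive):

```python
from datasets import load_dataset

# Placeholder path: point data_dir at the folder where you extracted LDC93S1.
timit = load_dataset("timit_asr", data_dir="~/timit_extracted")

print(timit)                      # DatasetDict with "train" and "test" splits
print(timit["train"][0]["text"])  # transcript of the first utterance
```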
dataset_infos.json DELETED
@@ -1 +0,0 @@
-{"clean": {"description": "The TIMIT corpus of reading speech has been developed to provide speech data for acoustic-phonetic research studies\nand for the evaluation of automatic speech recognition systems.\n\nTIMIT contains high quality recordings of 630 individuals/speakers with 8 different American English dialects,\nwith each individual reading upto 10 phonetically rich sentences.\n\nMore info on TIMIT dataset can be understood from the \"README\" which can be found here:\nhttps://catalog.ldc.upenn.edu/docs/LDC93S1/readme.txt\n", "citation": "@inproceedings{\n title={TIMIT Acoustic-Phonetic Continuous Speech Corpus},\n author={Garofolo, John S., et al},\n ldc_catalog_no={LDC93S1},\n DOI={https://doi.org/10.35111/17gk-bn40},\n journal={Linguistic Data Consortium, Philadelphia},\n year={1983}\n}\n", "homepage": "https://catalog.ldc.upenn.edu/LDC93S1", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "audio": {"sampling_rate": 16000, "mono": true, "decode": true, "id": null, "_type": "Audio"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "phonetic_detail": {"feature": {"start": {"dtype": "int64", "id": null, "_type": "Value"}, "stop": {"dtype": "int64", "id": null, "_type": "Value"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "word_detail": {"feature": {"start": {"dtype": "int64", "id": null, "_type": "Value"}, "stop": {"dtype": "int64", "id": null, "_type": "Value"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "dialect_region": {"dtype": "string", "id": null, "_type": "Value"}, "sentence_type": {"dtype": "string", "id": null, "_type": "Value"}, "speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "text"}, "task_templates": [{"task": "automatic-speech-recognition", "audio_column": "audio", "transcription_column": "text"}], "builder_name": "timit_asr", "config_name": "clean", "version": {"version_str": "2.0.1", "description": "", "major": 2, "minor": 0, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 6076580, "num_examples": 4620, "dataset_name": "timit_asr"}, "test": {"name": "test", "num_bytes": 2202968, "num_examples": 1680, "dataset_name": "timit_asr"}}, "download_checksums": {"https://data.deepai.org/timit.zip": {"num_bytes": 869007403, "checksum": "b79af42068b53045510d86854e2239a13ff77c4bd27803b40c28dce4bb5aeb0d"}}, "download_size": 869007403, "post_processing_size": null, "dataset_size": 8279548, "size_in_bytes": 877286951}}
dummy/clean/2.0.1/dummy_data.zip CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1d415bf8a373d2b6304c0e866936e0e3df530fe6ee2b0308d5965dbf4f2b4fd7
-size 292805
+oid sha256:ff277a8b6f09d34eceb60724a782412b8ff38465e8a60da2ce1eeaded644c60f
+size 294704
timit_asr.py CHANGED
@@ -18,8 +18,7 @@
 
 
 import os
-
-import pandas as pd
+from pathlib import Path
 
 import datasets
 from datasets.tasks import AutomaticSpeechRecognition
@@ -47,7 +46,6 @@ More info on TIMIT dataset can be understood from the "README" which can be foun
 https://catalog.ldc.upenn.edu/docs/LDC93S1/readme.txt
 """
 
-_URL = "https://data.deepai.org/timit.zip"
 _HOMEPAGE = "https://catalog.ldc.upenn.edu/LDC93S1"
 
 
@@ -71,6 +69,15 @@ class TimitASR(datasets.GeneratorBasedBuilder):
 
     BUILDER_CONFIGS = [TimitASRConfig(name="clean", description="'Clean' speech.")]
 
+    @property
+    def manual_download_instructions(self):
+        return (
+            "To use TIMIT you have to download it manually. "
+            "Please create an account and download the dataset from https://catalog.ldc.upenn.edu/LDC93S1 \n"
+            "Then extract all files in one folder and load the dataset with: "
+            "`datasets.load_dataset('timit_asr', data_dir='path/to/folder/folder_name')`"
+        )
+
     def _info(self):
         return datasets.DatasetInfo(
             description=_DESCRIPTION,
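
The property above only stores the instructions; the `_split_generators` change in the next hunk raises a `FileNotFoundError` that embeds them. A quick sanity-check sketch of that failure mode (the path below is deliberately bogus):

```python
from datasets import load_dataset

try:
    # Pointing data_dir at a folder that does not exist should surface the
    # manual download instructions in the error message.
    load_dataset("timit_asr", data_dir="/nonexistent/timit")
except FileNotFoundError as err:
    print(err)  # message ends with the manual download instructions
```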
@@ -106,42 +113,30 @@ class TimitASR(datasets.GeneratorBasedBuilder):
         )
 
     def _split_generators(self, dl_manager):
-        archive_path = dl_manager.download_and_extract(_URL)
 
-        train_csv_path = os.path.join(archive_path, "train_data.csv")
-        test_csv_path = os.path.join(archive_path, "test_data.csv")
+        data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+
+        if not os.path.exists(data_dir):
+            raise FileNotFoundError(
+                f"{data_dir} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('timit_asr', data_dir=...)` that includes files unzipped from the TIMIT zip. Manual download instructions: {self.manual_download_instructions}"
+            )
 
         return [
-            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_info_csv": train_csv_path}),
-            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"data_info_csv": test_csv_path}),
+            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"split": "train", "data_dir": data_dir}),
+            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"split": "test", "data_dir": data_dir}),
         ]
 
-    def _generate_examples(self, data_info_csv):
+    def _generate_examples(self, split, data_dir):
         """Generate examples from TIMIT archive_path based on the test/train csv information."""
-        # Extract the archive path
-        data_path = os.path.join(os.path.dirname(data_info_csv).strip(), "data")
-
-        # Read the data info to extract rows mentioning about non-converted audio only
-        data_info = pd.read_csv(open(data_info_csv, encoding="utf8"))
-        # making sure that the columns having no information about the file paths are removed
-        data_info.dropna(subset=["path_from_data_dir"], inplace=True)
-
-        # filter out only the required information for data preparation
-        data_info = data_info.loc[(data_info["is_audio"]) & (~data_info["is_converted_audio"])]
-
         # Iterating the contents of the data to extract the relevant information
-        for audio_idx in range(data_info.shape[0]):
-            audio_data = data_info.iloc[audio_idx]
-
-            # extract the path to audio
-            wav_path = os.path.join(data_path, *(audio_data["path_from_data_dir"].split("/")))
+        for wav_path in sorted(Path(data_dir).glob(f"**/{split.upper()}/**/*.WAV")):
 
             # extract transcript
-            with open(wav_path.replace(".WAV", ".TXT"), encoding="utf-8") as op:
+            with open(wav_path.with_suffix(".TXT"), encoding="utf-8") as op:
                 transcript = " ".join(op.readlines()[0].split()[2:])  # first two items are sample number
 
             # extract phonemes
-            with open(wav_path.replace(".WAV", ".PHN"), encoding="utf-8") as op:
+            with open(wav_path.with_suffix(".PHN"), encoding="utf-8") as op:
                 phonemes = [
                     {
                         "start": i.split(" ")[0],
@@ -152,7 +147,7 @@ class TimitASR(datasets.GeneratorBasedBuilder):
                 ]
 
             # extract words
-            with open(wav_path.replace(".WAV", ".WRD"), encoding="utf-8") as op:
+            with open(wav_path.with_suffix(".WRD"), encoding="utf-8") as op:
                 words = [
                     {
                         "start": i.split(" ")[0],
@@ -162,16 +157,21 @@ class TimitASR(datasets.GeneratorBasedBuilder):
                     for i in op.readlines()
                 ]
 
+            dialect_region = wav_path.parents[1].name
+            sentence_type = wav_path.name[0:2]
+            speaker_id = wav_path.parents[0].name[1:]
+            id_ = wav_path.stem
+
             example = {
-                "file": wav_path,
-                "audio": wav_path,
+                "file": str(wav_path),
+                "audio": str(wav_path),
                 "text": transcript,
                 "phonetic_detail": phonemes,
                 "word_detail": words,
-                "dialect_region": audio_data["dialect_region"],
-                "sentence_type": audio_data["filename"][0:2],
-                "speaker_id": audio_data["speaker_id"],
-                "id": audio_data["filename"].replace(".WAV", ""),
+                "dialect_region": dialect_region,
+                "sentence_type": sentence_type,
+                "speaker_id": speaker_id,
+                "id": id_,
             }
 
-            yield audio_idx, example
+            yield id_, example
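
Finally, the metadata that used to come from CSV columns is now recovered from the file path itself. How the four expressions in this hunk decompose a typical corpus path (the concrete path is illustrative):

```python
from pathlib import Path

wav_path = Path("/timit/data/TRAIN/DR4/MMDM0/SI681.WAV")

print(wav_path.parents[1].name)      # 'DR4'   -> dialect_region
print(wav_path.name[0:2])            # 'SI'    -> sentence_type
print(wav_path.parents[0].name[1:])  # 'MDM0'  -> speaker_id (sex prefix M/F stripped)
print(wav_path.stem)                 # 'SI681' -> id
```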