joelthe1 committed on
Commit 4b3db55
1 Parent(s): 442eb7b

Update Readme

Files changed (1)
  1. README.md +77 -159
README.md CHANGED
@@ -1,127 +1,96 @@
  ---
- pretty_name: 'Snow Mountain'
  language:
- - hi
- - bgc
- - kfs
- - dgo
- - bhd
- - gbk
- - xnr
- - kfx
- - mjl
- - kfo
- - bfz
  annotations_creators:
- - ?
  language_creators:
- - ?
- license: []
  multilinguality:
  - multilingual
- size_categories:
- -
  source_datasets:
  - Snow Mountain
- tags: []
  task_categories:
  - automatic-speech-recognition
  task_ids: []
  configs:
- - hi
- - bgc
  dataset_info:
- - config_name: hi
-   features:
-   - name: Unnamed
-     dtype: int64
-   - name: sentence
-     dtype: string
-   - name: path
-     dtype: string
-   splits:
-   - name: train_500
-     num_examples: 400
-   - name: val_500
-     num_examples: 100
-   - name: train_1000
-     num_examples: 800
-   - name: val_1000
-     num_examples: 200
-   - name: test_common
-     num_examples: 500
-   dataset_size: 71.41 hrs
- - config_name: bgc
-   features:
-   - name: Unnamed
-     dtype: int64
-   - name: sentence
-     dtype: string
-   - name: path
-     dtype: string
-   splits:
-   - name: train_500
-     num_examples: 400
-   - name: val_500
-     num_examples: 100
-   - name: train_1000
-     num_examples: 800
-   - name: val_1000
-     num_examples: 200
-   - name: test_common
-     num_examples: 500
-   dataset_size: 27.41 hrs
-
  ---

- # Dataset Card for [snow-mountain]
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)

  ## Dataset Description

  - **Homepage:**
- - **Repository:https://gitlabdev.bridgeconn.com/software/research/datasets/snow-mountain**
  - **Paper:https://arxiv.org/abs/2206.01205**
- - **Leaderboard:**
- - **Point of Contact:**

  ### Dataset Summary

- The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of The Bible in 11 Indian languages. The recordings were done in a studio setting by native speakers. Each language has a single speaker in the dataset. Most of these languages are geographically concentrated in the Northern part of India around the state of Himachal Pradesh. Being related to Hindi they all use the Devanagari script for transcription.

  We have used this dataset for experiments in ASR tasks. But these could be used for other applications in speech domain, like speaker recognition, language identification or even as unlabelled corpus for pre-training.

  ### Supported Tasks and Leaderboards

- Atomatic speech recognition, Speaker recognition, Language identification

  ### Languages

- Hindi, Haryanvi, Bilaspuri, Dogri, Bhadrawahi, Gaddi, Kangri, Kulvi, Mandeali, Kulvi Outer Seraji, Pahari Mahasui, Malayalam,Kannada, Tamil, Telugu

  ## Dataset Structure
  ```
@@ -164,7 +133,7 @@ data
  ```
  ### Data Instances

- A typical data point comprises the path to the audio file, usually called path and its transcription, called sentence.

  ```
  {'sentence': 'क्यूँके तू अपणी बात्तां कै कारण बेकसूर अर अपणी बात्तां ए कै कारण कसूरवार ठहराया जावैगा',
@@ -176,73 +145,24 @@ A typical data point comprises the path to the audio file, usually called path a

  ### Data Fields

- path: The path to the audio file

- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: dataset[0]["audio"] the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. dataset[0]["audio"] should always be preferred over dataset["audio"][0].

- sentence: the transcription of the audio file.

  ### Data Splits

- We create splits of the cleaned data for training and analysing the performance of ASR models. The splits are available in the `experiments` directory. The file names indicate the experiment and the split category. Additionally two csv files are included in the data splits - all_verses and short_verses. Various data splits were generated from these main two csvs. short_verses.csv contains audios of length < 10s and corresponding transcriptions where all_verses.csv contains complete cleaned verses including long and short audios. Due to the large size (>10MB), keeping these csvs as tar in `cleaned` folder.

  ## Dataset Loading
- `raw` folder has chapter wise audios in mp3 format. For doing experiments, we might need audios in wav format. Verse wise audios are keeping in `cleaned` folder in wav format. So our dataset size is much higher and hence loading might take some time. Here is the approximate time needed for laoding the Dataset.
- - Hindi (OT books) ~20 minutes
- - Hindi minority languages (NT books) ~9 minutes
- - Dravidian languages (OT+NT books) ~30 minutes
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- The Bible recordings were done in a studio setting by native speakers.
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information

- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]

  ### Licensing Information

@@ -251,14 +171,12 @@ The data is licensed under the Creative Commons Attribution-ShareAlike 4.0 Inter

  ### Citation Information

  @inproceedings{Raju2022SnowMD,
  title={Snow Mountain: Dataset of Audio Recordings of The Bible in Low Resource Languages},
  author={Kavitha Raju and V. Anjaly and R. Allen Lish and Joel Mathew},
  year={2022}
  }
-
-
-
- ### Contributions
-
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
 
  ---
+ pretty_name: Snow Mountain
  language:
+ - hi
+ - bgc
+ - kfs
+ - dgo
+ - bhd
+ - gbk
+ - xnr
+ - kfx
+ - mjl
+ - kfo
+ - bfz
  annotations_creators:
+ - 'null': null
  language_creators:
+ - 'null': null
  multilinguality:
  - multilingual
  source_datasets:
  - Snow Mountain
  task_categories:
  - automatic-speech-recognition
+ - text-to-speech
  task_ids: []
  configs:
+ - hi
+ - bgc
  dataset_info:
+ - config_name: hi
+   features:
+   - name: Unnamed
+     dtype: int64
+   - name: sentence
+     dtype: string
+   - name: path
+     dtype: string
+   splits:
+   - name: train_500
+     num_examples: 400
+   - name: val_500
+     num_examples: 100
+   - name: train_1000
+     num_examples: 800
+   - name: val_1000
+     num_examples: 200
+   - name: test_common
+     num_examples: 500
+   dataset_size: 71.41 hrs
+ - config_name: bgc
+   features:
+   - name: Unnamed
+     dtype: int64
+   - name: sentence
+     dtype: string
+   - name: path
+     dtype: string
+   splits:
+   - name: train_500
+     num_examples: 400
+   - name: val_500
+     num_examples: 100
+   - name: train_1000
+     num_examples: 800
+   - name: val_1000
+     num_examples: 200
+   - name: test_common
+     num_examples: 500
+   dataset_size: 27.41 hrs

  ---

+ # snow-mountain

  ## Dataset Description

  - **Homepage:**
  - **Paper:** https://arxiv.org/abs/2206.01205
+ - **Point of Contact:** Joel Mathew

  ### Dataset Summary

+ The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of The Bible (both the Old Testament (OT) and the New Testament (NT)) in 11 Indian languages. The recordings were done in a studio setting by native speakers. Each language has a single speaker in the dataset. Most of these languages are geographically concentrated in the northern part of India, around the state of Himachal Pradesh. Being related to Hindi, they all use the Devanagari script for transcription.

  We have used this dataset for experiments on ASR tasks, but it could also be used for other applications in the speech domain, such as speaker recognition, language identification, or even as an unlabelled corpus for pre-training.

  ### Supported Tasks and Leaderboards

+ Automatic speech recognition, Speech-to-Text, Speaker recognition, Language identification

  ### Languages

+ Hindi, Haryanvi, Bilaspuri, Dogri, Bhadrawahi, Gaddi, Kangri, Kulvi, Mandeali, Kulvi Outer Seraji, Pahari Mahasui, Malayalam, Kannada, Tamil, Telugu

  ## Dataset Structure
  ```

  ```
  ### Data Instances

+ A data point comprises the path to the audio file, called `path`, and its transcription, called `sentence`.

  ```
  {'sentence': 'क्यूँके तू अपणी बात्तां कै कारण बेकसूर अर अपणी बात्तां ए कै कारण कसूरवार ठहराया जावैगा',
 

  ### Data Fields

+ `path`: The path to the audio file.

+ `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.

+ `sentence`: The transcription of the audio file.
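
To make the access-order point concrete, here is a minimal illustrative sketch in Python (not part of the committed card). The repository id `bridgeconn/snow-mountain` and the `hi` config are assumptions based on this card's metadata, and it assumes the loader exposes the `audio` column described above:

```
from datasets import load_dataset

# Assumed repository id and config; adjust to where this dataset is actually hosted.
ds = load_dataset("bridgeconn/snow-mountain", "hi", split="train_500")

# Preferred: pick the sample first, then read its "audio" column.
# Only this one file is decoded and resampled.
sample = ds[0]
print(sample["sentence"])                # transcription
audio = sample["audio"]                  # {"path": ..., "array": ..., "sampling_rate": ...}
print(audio["sampling_rate"], len(audio["array"]))

# Avoid ds["audio"][0]: it decodes every audio file in the split
# before returning the first one.
```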

  ### Data Splits

+ We create splits of the cleaned data for training and analysing the performance of ASR models. The splits are available in the `experiments` directory. The file names indicate the experiment and the split category. Additionally, two CSV files are included in the data splits: `all_verses` and `short_verses`. Various data splits were generated from these two main CSVs. `short_verses.csv` contains audios of length < 10s and the corresponding transcriptions, while `all_verses.csv` contains all cleaned verses, including both long and short audios. Due to their large size (>10 MB), we keep these CSVs compressed as `tar.gz` in the `cleaned` folder.
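
As an aside (not from the card), a small sketch of inspecting one of these split files with pandas; the exact paths under `experiments/` are hypothetical, while the column names follow the features listed in the metadata above:

```
import pandas as pd

# Hypothetical paths inside the repository; adjust to the real layout of `experiments/`.
train = pd.read_csv("experiments/hi/train_500.csv")
val = pd.read_csv("experiments/hi/val_500.csv")

# The *_500 splits should hold 400 training and 100 validation examples (see metadata).
print(len(train), len(val))
print(train[["sentence", "path"]].head())
```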

  ## Dataset Loading
+ The `raw` folder has chapter-wise audio in .mp3 format. For experiments we might need audio in .wav format, so verse-wise audio files are kept in the `cleaned` folder in .wav format. This makes the dataset much larger, which leads to longer loading times into memory. Here is the approximate time needed for loading the dataset (a loading sketch in code follows this list):
+ - Hindi (OT books): ~20 minutes
+ - Hindi minority languages (NT books): ~9 minutes
+ - Dravidian languages (OT+NT books): ~30 minutes
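
The sketch below shows one way this loading step might look in code (illustrative only; the repository id and the 16 kHz target rate are assumptions, while the config and split names come from this card):

```
from datasets import load_dataset, Audio

# Assumed repository id; "hi" and the split names are taken from this card.
train = load_dataset("bridgeconn/snow-mountain", "hi", split="train_1000")
test = load_dataset("bridgeconn/snow-mountain", "hi", split="test_common")

# If a model expects 16 kHz input, resample lazily at access time
# (assumes the loader exposes an "audio" column as described above).
train = train.cast_column("audio", Audio(sampling_rate=16_000))
test = test.cast_column("audio", Audio(sampling_rate=16_000))

print(train)
print(test[0]["sentence"])
```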

+ ## Details
+ Please refer to the paper for more details on the creation of the dataset and the rationale behind the splits.

  ### Licensing Information

  ### Citation Information

+ Please cite this work if you make use of it:
+
+ ```
  @inproceedings{Raju2022SnowMD,
  title={Snow Mountain: Dataset of Audio Recordings of The Bible in Low Resource Languages},
  author={Kavitha Raju and V. Anjaly and R. Allen Lish and Joel Mathew},
  year={2022}
  }
+ ```