bond005 committed
Commit 2d1febc
1 Parent(s): 3fed945

Update README.md

Files changed (1):
  1. README.md +133 -20
README.md CHANGED
@@ -1,23 +1,136 @@
  ---
- dataset_info:
-   features:
-   - name: audio
-     dtype: audio
-   - name: transcription
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 1298753395.436
-     num_examples: 9994
-   - name: train
-     num_bytes: 1035865109.026
-     num_examples: 7993
-   - name: validation
-     num_bytes: 104084274.0
-     num_examples: 793
-   download_size: 2272668046
-   dataset_size: 2438702778.462
  ---
- # Dataset Card for "sberdevices_golos_10h_crowd"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  ---
+ pretty_name: Golos
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ - expert-generated
+ language:
+ - ru
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ paperswithcode_id: golos
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended
+ task_categories:
+ - automatic-speech-recognition
+ - audio-classification
  ---

+
+ # Dataset Card for sberdevices_golos_10h_crowd
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+ ## Dataset Description
+ - **Homepage:** [Golos ASR corpus](https://www.openslr.org/114)
+ - **Repository:** [Golos dataset](https://github.com/sberdevices/golos)
+ - **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf)
+ - **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
+ - **Point of Contact:** [Nikolay Karpov](mailto:karpnv@gmail.com)
+ ### Dataset Summary
+ Sberdevices Golos is a corpus of approximately 1200 hours of 16 kHz Russian speech from the crowd (read speech) and farfield (communication with smart devices) domains, prepared by the SberDevices team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data was collected on a crowd-sourcing platform and has been manually annotated.
+
+ The authors divide the dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of recordings, the authors also defined smaller training subsets: 100 hours, 10 hours, 1 hour, and 10 minutes.
+
+ This dataset is a simpler version of the above-mentioned Golos:
+
+ - it includes the crowd domain only (without any audio from the farfield domain);
+ - the validation split is built from the 1-hour training subset;
+ - the training split corresponds to the 10-hour training subset without the recordings from the 1-hour training subset;
+ - the test split is the full original test split.
+
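+ A minimal loading sketch (assuming this dataset is published on the Hub under the id `bond005/sberdevices_golos_10h_crowd`; substitute the actual repository id if it differs):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load all three splits of the 10-hour crowd subset.
+ golos = load_dataset("bond005/sberdevices_golos_10h_crowd")
+
+ print(golos)  # DatasetDict with train, validation, and test splits
+ print(golos["train"][0]["transcription"])
+ ```
+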
+ ### Supported Tasks and Leaderboards
+ - `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it into written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard, which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench; it ranks models uploaded to the Hub based on their WER.
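+
+ As a sketch of how WER is typically computed, here is a minimal example using the 🤗 `evaluate` library (the reference and prediction strings are invented for illustration):
+
+ ```python
+ import evaluate
+
+ # WER = (substitutions + insertions + deletions) / number of reference words
+ wer_metric = evaluate.load("wer")
+
+ references = ["шестнадцатая часть сезона пять"]   # ground-truth transcription
+ predictions = ["шестнадцатая часть сезона пяти"]  # model output, one substitution
+
+ print(wer_metric.compute(predictions=predictions, references=references))  # 0.25
+ ```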
+ ### Languages
+ The audio is in Russian.
+ ## Dataset Structure
+ ### Data Instances
+ A typical data point comprises the audio data, usually called `audio`, and its transcription, called `transcription`. No additional information about the speaker or the passage that contains the transcription is provided.
+ ```
+ {'audio': {'path': None,
+            'array': array([ 3.05175781e-05,  3.05175781e-05,  0.00000000e+00, ...,
+                            -1.09863281e-03, -7.93457031e-04, -1.52587891e-04], dtype=float64),
+            'sampling_rate': 16000},
+  'transcription': 'шестнадцатая часть сезона пять сериала лемони сникет тридцать три несчастья'}
+ ```
+ ### Data Fields
+ - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A short sketch of both access patterns follows this list.
+ - `transcription`: The transcription of the audio file.
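+
+ A minimal sketch of the efficient access pattern and of resampling on the fly (again assuming the hypothetical repository id `bond005/sberdevices_golos_10h_crowd`):
+
+ ```python
+ from datasets import Audio, load_dataset
+
+ ds = load_dataset("bond005/sberdevices_golos_10h_crowd", split="validation")
+
+ # Efficient: decodes only the audio of the first example.
+ sample = ds[0]["audio"]
+ print(sample["sampling_rate"])  # 16000
+
+ # Inefficient: ds["audio"][0] would decode the whole column first.
+
+ # Resample lazily on access, e.g. for a model trained on 8 kHz audio:
+ ds = ds.cast_column("audio", Audio(sampling_rate=8000))
+ print(ds[0]["audio"]["sampling_rate"])  # 8000
+ ```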
+ ### Data Splits
+ As noted above, this dataset is a simpler version of the original Golos:
+
+ - it includes the crowd domain only (without any audio from the farfield domain);
+ - the validation split is built from the 1-hour training subset;
+ - the training split corresponds to the 10-hour training subset without the recordings from the 1-hour training subset;
+ - the test split is the full original test split.
+
+ |          | Train | Validation | Test  |
+ | -------- | ----- | ---------- | ----- |
+ | examples | 7993  | 793        | 9994  |
+ | hours    | < 10h | < 1h       | 11.2h |
+ ## Dataset Creation
+ ### Curation Rationale
+ [Needs More Information]
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ [Needs More Information]
+ #### Who are the source language producers?
+ [Needs More Information]
+ ### Annotations
+ #### Annotation process
+ All recorded audio files were manually annotated on the crowd-sourcing platform.
+ #### Who are the annotators?
+ [Needs More Information]
+ ### Personal and Sensitive Information
+ The dataset consists of recordings from people who have donated their voices. You agree not to attempt to determine the identity of speakers in this dataset.
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ [Needs More Information]
+ ### Discussion of Biases
+ [Needs More Information]
+ ### Other Known Limitations
+ [Needs More Information]
+ ## Additional Information
+ ### Dataset Curators
+ The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
+ ### Licensing Information
+ [Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf)
+ ### Citation Information
+ ```bibtex
+ @misc{karpov2021golos,
+   author    = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor},
+   title     = {Golos: Russian Dataset for Speech Research},
+   publisher = {arXiv},
+   year      = {2021},
+   url       = {https://arxiv.org/abs/2106.10161}
+ }
+ ```
+ ### Contributions
+ Thanks to [@bond005](https://github.com/bond005) for adding this dataset.