mpenagar committed: Update README.md (commit a2fbcaf, parent e3c318d)
Files changed (1): README.md (+97 -2)
language:
- es
- eu
pretty_name: Basque Parliament Speech Corpus 1.0
---

# Dataset Card for Basque Parliament Speech Corpus 1.0

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)

## Dataset Description

- **Repository:** https://huggingface.co/datasets/gttsehu/basque_parliament_1
- **Paper:** https://arxiv.org/
- **Contact:** [Luis J. Rodriguez-Fuentes](mailto:luisjavier.rodriguez@ehu.eus)

### Dataset Summary

The Basque Parliament Speech Corpus 1.0 consists of 1462 hours of speech extracted from
Basque Parliament plenary sessions held from 2013 to 2022. Encoded as MP3 files, the
dataset contains 759192 transcribed segments spoken in Basque, in Spanish, or in both
languages.

The corpus was created to support the development of speech technology for the Basque
language, which is relatively low-resourced. The dataset is also well suited to the
development of bilingual ASR systems that decode speech signals in Basque and/or
Spanish. Given the similarity between Basque and Spanish at the phonetic/phonological
level, acoustic models can be shared by both languages, which helps circumvent the lack
of training data for Basque.

The dataset consists of four splits: `train`, `train_clean`, `dev` and `test`, all of
them containing speech segments 3 to 10 seconds long together with their corresponding
transcriptions. Besides the transcription, each segment includes a speaker identifier
and a language tag (Spanish, Basque or bilingual).

The `train` split, intended for estimating acoustic models, was extracted from the
2013-2021 sessions and amounts to 1445 hours of speech. The `train_clean` split is a
subset of the `train` split containing only highly reliable transcriptions. The `dev`
and `test` splits, amounting to 7.6 and 9.6 hours of speech respectively, were
extracted from the February 2022 sessions, and their transcriptions were manually
audited.

### Languages

The dataset contains segments spoken in Basque (`eu`), Spanish (`es`) or both (`bi`).
The language distribution is strongly biased towards Spanish, and bilingual segments
are very infrequent.

Duration (in hours) disaggregated by language:

| **Split**   | **es** | **eu** | **bi** | **Total** |
|------------:|-------:|-------:|-------:|----------:|
| train       | 1018.6 |  409.5 |   17.0 |    1445.1 |
| train_clean |  937.7 |  363.6 |   14.2 |    1315.5 |
| dev         |    4.7 |    2.6 |    0.3 |       7.6 |
| test        |    6.4 |    2.8 |    0.4 |       9.6 |

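As a quick sanity check, the per-language shares implied by the table can be computed
directly; a minimal sketch using the `train` row (numbers copied from the table above):

```python
# Hours per language in the `train` split, taken from the table above
train_hours = {"es": 1018.6, "eu": 409.5, "bi": 17.0}
total = sum(train_hours.values())  # 1445.1 hours

# Percentage share per language, rounded to one decimal place
shares = {lang: round(100 * h / total, 1) for lang, h in train_hours.items()}
print(shares)  # {'es': 70.5, 'eu': 28.3, 'bi': 1.2}
```

This makes the bias concrete: roughly 70% Spanish, 28% Basque and about 1% bilingual
speech in the training data.
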
Number of segments disaggregated by language:

| **Split**   | **es** | **eu** | **bi** | **Total** |
|------------:|-------:|-------:|-------:|----------:|
| train       | 524942 | 216201 |   8802 |    749945 |
| train_clean | 469937 | 184950 |   6984 |    661871 |
| dev         |   2567 |   1397 |    131 |      4095 |
| test        |   3450 |   1521 |    181 |      5152 |

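Combining the two tables gives the average segment duration per split; a small
verification sketch (numbers copied from the tables above):

```python
# Hours and segment counts per split, taken from the two tables above
hours = {"train": 1445.1, "train_clean": 1315.5, "dev": 7.6, "test": 9.6}
segments = {"train": 749945, "train_clean": 661871, "dev": 4095, "test": 5152}

# Average segment duration in seconds, per split
avg_sec = {split: round(hours[split] * 3600 / segments[split], 2) for split in hours}
print(avg_sec)  # {'train': 6.94, 'train_clean': 7.16, 'dev': 6.68, 'test': 6.71}
```

All four averages fall inside the 3 to 10 second range stated in the summary.
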
74
+ The dataset contains four configs that can be used to select the full set of multilingual
75
+ segments or just a subset of them, constrained to a single language:
76
+
77
+ * `all` : all the segments
78
+ * `es` : only the Spanish segments
79
+ * `eu` : only the Basque segments
80
+ * `bi` : only the bilingual segments
81
+
## How to use

The dataset can be loaded from Python with the `datasets` library. A single call to the
`load_dataset` function downloads it to your local drive. For example, to download the
Basque portion of the `train` split, simply specify the desired language config name
(i.e., "eu" for Basque) and the split:

```python
from datasets import load_dataset

ds = load_dataset("gttsehu/basque_parliament_1", "eu", split="train")
```

The default config is `all`, and if no split is indicated all splits are prepared, so
the following code prepares the full dataset:

```python
from datasets import load_dataset

ds = load_dataset("gttsehu/basque_parliament_1")
```
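
Once loaded, each example behaves like a Python mapping over the fields described in the
summary (transcription, speaker identifier, language tag). The field names used below
(`sentence`, `speaker_id`, `language`) are assumptions for illustration only; inspect
`ds.features` after loading to see the actual column names:

```python
# Hypothetical records mirroring the fields described in the summary;
# the real column names may differ (check ds.features after loading).
segments = [
    {"sentence": "Egun on guztioi", "speaker_id": "spk_001", "language": "eu"},
    {"sentence": "Buenos dias a todos", "speaker_id": "spk_002", "language": "es"},
    {"sentence": "Eskerrik asko, gracias", "speaker_id": "spk_001", "language": "bi"},
]

# Keep only the Basque segments, much like the `eu` config does
basque = [s["sentence"] for s in segments if s["language"] == "eu"]
print(basque)  # ['Egun on guztioi']
```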
+