---
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- es
- eu
pretty_name: Basque Parliament Speech Corpus 1.0
---
# Dataset Card for Basque Parliament Speech Corpus 1.0
This work was partially funded by the Spanish Ministry of Science and Innovation (OPENSPEECH
project, PID2019-106424RB-I00).
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
## Dataset Description
- **Repository:** https://huggingface.co/datasets/gttsehu/basque_parliament_1
- **Paper:** [10.3390/app14051951](https://doi.org/10.3390/app14051951)
- **Contact:** [Luis J. Rodriguez-Fuentes](mailto:luisjavier.rodriguez@ehu.eus)
- **Funding:** Spanish Ministry of Science and Innovation
### Dataset Summary
The Basque Parliament Speech Corpus 1.0 consists of 1462 hours of speech extracted from
Basque Parliament plenary sessions held from 2013 to 2022. Encoded as MP3 files, the
dataset contains 759192 transcribed segments spoken in Basque, in Spanish, or in both
languages (bilingual).
The corpus was created to support the development of speech technology for Basque, a
relatively low-resource language. However, the dataset is also suited to the development
of bilingual ASR systems that decode speech signals in Basque and/or Spanish. Given the
similarity between Basque and Spanish at the phonetic/phonological level, acoustic models
can be shared by both languages, which helps circumvent the lack of training data for
Basque.
The dataset consists of four splits: `train`, `train_clean`, `dev` and `test`, all of
them containing 3-10 second long speech segments and their corresponding transcriptions.
Besides the transcription, each segment includes a speaker identifier and a language tag
(Spanish, Basque or bilingual).
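As an illustration, a single segment can be thought of as a record like the following. The field names below are illustrative assumptions for this sketch, not the dataset's documented schema:

```python
# Hypothetical sketch of a per-segment record as described above.
# Field names are assumptions made for illustration only.
segment = {
    "path": "...",                                 # MP3 audio file for the segment
    "sentence": "Eskerrik asko, presidente andrea.",  # transcription
    "speaker": "spk_0042",                         # speaker identifier (hypothetical)
    "language": "eu",                              # language tag: "es", "eu" or "bi"
    "length": 4.2,                                 # duration in seconds (3-10 s range)
}

# Basic sanity checks on the sketched record
assert segment["language"] in {"es", "eu", "bi"}
assert 3.0 <= segment["length"] <= 10.0
```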
The `train` split, aimed at estimating acoustic models, was extracted from 2013-2021
sessions, amounting to 1445 hours of speech. The `train_clean` split is a subset of
the `train` split, containing only highly reliable transcriptions. The `dev` and `test`
splits, amounting to 7.6 and 9.6 hours of speech respectively, were extracted from
February 2022 sessions and their transcripts were manually audited.
### Languages
The dataset contains segments either spoken in Basque (`eu`), Spanish (`es`) or both (`bi`).
The language distribution is strongly biased towards Spanish, and bilingual segments are
very infrequent.
Duration (in hours) disaggregated per language:
| **Split** | **es** | **eu** | **bi** | **Total** |
|------------:|-------:|-------:|-------:|----------:|
| train | 1018.6 | 409.5 | 17.0 | 1445.1 |
| train_clean | 937.7 | 363.6 | 14.2 | 1315.5 |
| dev | 4.7 | 2.6 | 0.3 | 7.6 |
| test | 6.4 | 2.8 | 0.4 | 9.6 |
Number of segments disaggregated per language:
| **Split** | **es** | **eu** | **bi** | **Total** |
|------------:|-------:|-------:|-------:|----------:|
| train | 524942 | 216201 | 8802 | 749945 |
| train_clean | 469937 | 184950 | 6984 | 661871 |
| dev | 2567 | 1397 | 131 | 4095 |
| test | 3450 | 1521 | 181 | 5152 |
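The bias towards Spanish can be quantified directly from the duration table above; for instance, the per-language shares of the `train` split work out as follows:

```python
# Language shares (by duration) of the train split, taken from the table above.
train_hours = {"es": 1018.6, "eu": 409.5, "bi": 17.0}

total = sum(train_hours.values())  # 1445.1 hours in total
shares = {lang: round(100 * h / total, 1) for lang, h in train_hours.items()}

print(shares)  # → {'es': 70.5, 'eu': 28.3, 'bi': 1.2}
```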
The dataset contains four configs that can be used to select the full set of multilingual
segments or just a subset of them, constrained to a single language:
* `all` : all the segments
* `es` : only the Spanish segments
* `eu` : only the Basque segments
* `bi` : only the bilingual segments
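Conceptually, each language config selects the subset of segments whose language tag matches the config name, with `all` keeping everything. A minimal sketch of that selection logic, over hypothetical in-memory records (field names are assumptions):

```python
# Toy records standing in for dataset segments; the "language" field name
# is an assumption made for this sketch.
segments = [
    {"id": 0, "language": "es"},
    {"id": 1, "language": "eu"},
    {"id": 2, "language": "bi"},
    {"id": 3, "language": "eu"},
]

def select_config(segments, config):
    """Return the segments matching a config name: 'all', 'es', 'eu' or 'bi'."""
    if config == "all":
        return list(segments)
    return [s for s in segments if s["language"] == config]

print([s["id"] for s in select_config(segments, "eu")])  # → [1, 3]
```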
## How to use
You can use the `datasets` library to load the dataset from Python. The dataset can be
downloaded to your local drive in a single call to the `load_dataset` function. For
example, to download the Basque config of the `train` split, simply specify the
desired language config name (i.e., "eu" for Basque) and the split:
```python
from datasets import load_dataset
ds = load_dataset("gttsehu/basque_parliament_1", "eu", split="train")
```
The default config is `all`, and if no split is specified, all splits are prepared; thus
the following code prepares the full dataset:
```python
from datasets import load_dataset
ds = load_dataset("gttsehu/basque_parliament_1")
```