
Dataset Card for Basque Parliament Speech Corpus 1.0

Dataset Summary

The Basque Parliament Speech Corpus 1.0 consists of 1462 hours of speech extracted from Basque Parliament plenary sessions held between 2013 and 2022. The audio is encoded as MP3 files, and the dataset contains 759192 transcribed segments spoken in Basque, in Spanish, or in both languages.

The corpus was created to support the development of speech technology for Basque, a relatively low-resourced language. The dataset is also well suited to developing bilingual ASR systems, i.e., systems that decode speech signals in Basque and/or Spanish. Given the similarity between Basque and Spanish at the phonetic/phonological level, acoustic models can be shared across both languages, which helps circumvent the scarcity of training data for Basque.

The dataset consists of four splits: train, train_clean, dev and test, all of them containing 3-10 second speech segments and their corresponding transcriptions. Besides the transcription, each segment includes a speaker identifier and a language tag (Spanish, Basque or bilingual).

The train split, intended for acoustic model training, was extracted from the 2013-2021 sessions and amounts to 1445 hours of speech. The train_clean split is a subset of the train split containing only highly reliable transcriptions. The dev and test splits, amounting to 7.6 and 9.6 hours of speech respectively, were extracted from February 2022 sessions, and their transcripts were manually audited.
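
A quick way to see what each segment carries (audio, transcription, speaker identifier and language tag) is to load one of the audited splits and print its first element. This is a minimal sketch; the exact column names are not documented here and are left to inspection rather than assumed:

from datasets import load_dataset

# Load the manually audited dev split of the full (all) config
ds = load_dataset("gttsehu/basque_parliament_1", "all", split="dev")

# Each segment is expected to expose the MP3 audio, its transcription,
# a speaker identifier and a language tag (es / eu / bi)
print(ds.column_names)
print(ds[0])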

Languages

The dataset contains segments spoken either in Basque (eu), Spanish (es) or both (bi). The language distribution is strongly biased towards Spanish, and bilingual segments are very infrequent.

Duration (in hours) disaggregated per language:

Split        es      eu     bi    Total
train        1018.6  409.5  17.0  1445.1
train_clean  937.7   363.6  14.2  1315.5
dev          4.7     2.6    0.3   7.6
test         6.4     2.8    0.4   9.6

Number of segments disaggregated per language:

Split        es      eu      bi    Total
train        524942  216201  8802  749945
train_clean  469937  184950  6984  661871
dev          2567    1397    131   4095
test         3450    1521    181   5152

The dataset provides four configs that can be used to select either the full set of multilingual segments or a subset constrained to a single language, as illustrated in the example after this list:

  • all : all the segments
  • es : only the Spanish segments
  • eu : only the Basque segments
  • bi : only the bilingual segments
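
For instance, a minimal sketch of loading only the bilingual segments (the same pattern applies to the es and eu configs):

from datasets import load_dataset

# Select only the bilingual (Basque + Spanish) segments of the dev split
ds_bi = load_dataset("gttsehu/basque_parliament_1", "bi", split="dev")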

How to use

You can use the datasets library to load the dataset from Python. The dataset can be downloaded to your local drive in a single call to the load_dataset function. For example, to download the Basque config of the train split, simply specify the desired language config name (i.e., "eu" for Basque) and the split:

from datasets import load_dataset

ds = load_dataset("gttsehu/basque_parliament_1", "eu", split="train")

The default config is all, and if no split is specified all splits are prepared, so the following code prepares the full dataset:

from datasets import load_dataset

ds = load_dataset("gttsehu/basque_parliament_1")
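
If downloading the full corpus up front is not desired, the datasets library also offers a streaming mode. Whether streaming is supported by this dataset's loading script is not stated here, so the following is only a sketch:

from datasets import load_dataset

# Iterate over the Basque training segments without downloading the whole corpus first
ds_stream = load_dataset("gttsehu/basque_parliament_1", "eu", split="train", streaming=True)

for segment in ds_stream:
    print(segment)  # each streamed element is a plain Python dict
    break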