Update README.md
[CML-TTS](https://huggingface.co/datasets/ylacombe/cml-tts) [1] is a recursive acronym for CML-Multi-Lingual-TTS, a Text-to-Speech (TTS) dataset developed at the Center of Excellence in Artificial Intelligence (CEIA) of the Federal University of Goiás (UFG). It comprises audiobooks sourced from the public-domain books of Project Gutenberg, read by volunteers from the LibriVox project. The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24 kHz.
The [original dataset](https://huggingface.co/datasets/ylacombe/cml-tts) has been [cleaned](https://huggingface.co/datasets/PHBJT/cml-tts-filtered) by removing all rows with a Levenshtein score below 0.9. In the `text_description` column, it provides natural language annotations of the characteristics of speakers and utterances, generated using [the Data-Speech repository](https://github.com/huggingface/dataspeech).
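The cleaning rule can be sketched in plain Python (a toy illustration, not the actual pipeline; the normalization used here and the idea of comparing the reference text against a transcript are assumptions): compute a normalized Levenshtein similarity and keep only rows scoring at least 0.9.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    # Normalized to [0, 1]; 1.0 means identical strings.
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

rows = [
    {"text": "bonjour le monde", "transcript": "bonjour le monde"},   # identical, kept
    {"text": "bonjour le monde", "transcript": "bonjour le mondes"},  # near match, kept
    {"text": "bonjour le monde", "transcript": "salut"},              # low score, dropped
]
kept = [r for r in rows if similarity(r["text"], r["transcript"]) >= 0.9]
```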
This dataset was used alongside the [LibriTTS-R English dataset](https://huggingface.co/datasets/blabble-io/libritts_r) and the [non-English subset of MLS](https://huggingface.co/datasets/facebook/multilingual_librispeech) to train [Parler-TTS Multilingual Mini v1.1](https://huggingface.co/ylacombe/p-m-e).
A training recipe is available in [the Parler-TTS library](https://github.com/huggingface/parler-tts).

Here is an example of how to load the `clean` config with only the `train.clean.3` split:

```py
from datasets import load_dataset
load_dataset("PHBJT/cml-tts-filtered", "french", split="train")
```
**Note:** This dataset doesn't include the audio column of the original version. You can merge it back into the original dataset using [this script](https://github.com/huggingface/dataspeech/blob/main/scripts/merge_audio_to_metadata.py) from Data-Speech or, even better, take inspiration from [the training script](https://github.com/huggingface/parler-tts/blob/main/training/run_parler_tts_training.py) of Parler-TTS, which efficiently processes multiple annotated datasets.
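In spirit, that merge is a keyed join (a toy sketch with plain Python structures; the real script operates on `datasets` objects, and the `id`/`audio` column names here are assumptions):

```python
# Toy stand-ins: the original rows (with audio) and the filtered annotations.
original = [
    {"id": "utt1", "audio": "a1.wav", "text": "bonjour le monde"},
    {"id": "utt2", "audio": "a2.wav", "text": "salut"},
    {"id": "utt3", "audio": "a3.wav", "text": "bonsoir"},
]
# Annotations only exist for rows that survived filtering.
annotations = {
    "utt1": "a female speaker with a slightly expressive tone",
    "utt3": "a male speaker in a quiet recording",
}

# Keep only annotated rows and attach each description alongside its audio.
merged = [
    {**row, "text_description": annotations[row["id"]]}
    for row in original
    if row["id"] in annotations
]
print([row["id"] for row in merged])  # ['utt1', 'utt3']
```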

You can find the filtered dataset [here](https://huggingface.co/datasets/PHBJT/cml-tts-filtered).
### Dataset Description