This custom multilingual, multi-speaker TTS speech corpus contains 12,800 balanced samples: audio files (WAV format, sampled at 16,000 Hz) with related transcriptions (CSV format, two columns) from 18 speakers. The dataset has been assembled from the following sources:

  • VCTK : 428 + 426 + 426 English male samples (p259, p274, p286) (CC BY 4.0)
  • LJSpeech : 1280 English female samples (public domain)
  • m-ailabs : 1280 French male samples (free public licence)
  • SIWIS : 1024 French female samples (CC BY 4.0)
  • Rhasspy : 1082 German female samples (CC0 1.0)
  • Thorsten : 1280 German male samples (CC0)
  • TTS-Portuguese-Corpus : 2560 Portuguese male samples (CC BY 4.0)
  • Marylux : 663 Luxembourgish, 198 German & 256 French female samples (CC BY-NC-SA 4.0)
  • uni.lu : 409 Luxembourgish female & 231 Luxembourgish male samples (© uni.lu)
  • rtl.lu : 1257 Luxembourgish male samples (© RTL-CLT-UFA)
  • Charel : 11 Luxembourgish boy samples from my grandchild
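
The stated audio format (16-bit WAV at 16,000 Hz) can be verified per file with Python's standard-library wave module. A minimal sketch, using a synthetic one-second clip in place of an actual corpus file (the path "sample.wav" is illustrative):

```python
import wave

# Write a short silent clip at 16 kHz, mono, 16-bit PCM, then read the
# header back -- the same check works on any corpus WAV file.
with wave.open("sample.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(16000)    # sampling rate stated above
    w.writeframes(b"\x00\x00" * 16000)  # one second of silence

with wave.open("sample.wav", "rb") as w:
    rate = w.getframerate()
    frames = w.getnframes()

print(rate, frames)  # sampling rate and frame count from the header
```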

The dataset has been manually checked, and the transcriptions have been expanded and corrected where necessary to match the audio files. The data structure follows the mailabs format; the folder nesting is shown below:

mailabs
   language-1
      by_book
         female
            speaker-1
               wavs/
               metadata.csv
               metadata-train.csv
               metadata-eval.csv
            speaker-2
               wavs/
               metadata.csv
               metadata-train.csv
               metadata-eval.csv
            ...
         male
            speaker-1
               wavs/
               metadata.csv
               metadata-train.csv
               metadata-eval.csv
            speaker-2
               wavs/
               metadata.csv
               metadata-train.csv
               metadata-eval.csv
            ...
   language-2
      by_book
         ...
   language-3
      by_book
         ...
   ...
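
A two-column metadata file in this style can be parsed with the standard csv module. A minimal sketch, assuming pipe-separated rows of file id and transcription (the "|" delimiter and the row contents are assumptions; adjust to match the actual files):

```python
import csv
import io

# Hypothetical metadata.csv contents: one "file_id|transcription" row
# per audio clip, as described for the two-column CSV format above.
sample = "speaker1_0001|Moien, wéi geet et?\nspeaker1_0002|Et geet gutt."

# Parse with "|" as the field delimiter so commas inside the
# transcription text do not split the row.
rows = list(csv.reader(io.StringIO(sample), delimiter="|"))
pairs = {file_id: text for file_id, text in rows}
```

Each key can then be joined with the sibling wavs/ folder to locate the matching audio file.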

Thanks to RTL and to the University of Luxembourg for permission to use and share selected copyrighted data.
