
Dataset Card for LibriTTS-R

LibriTTS-R [1] is a sound-quality-improved version of the LibriTTS corpus (http://www.openslr.org/60/), a multi-speaker English corpus of approximately 585 hours of read English speech sampled at 24 kHz, published in 2019.

Overview

This is the LibriTTS-R dataset, adapted for the datasets library.

Usage

Splits

There are 7 splits (dots replace the dashes used in the original dataset's split names, to comply with Hugging Face naming requirements):

  • dev.clean
  • dev.other
  • test.clean
  • test.other
  • train.clean.100
  • train.clean.360
  • train.other.500

Configurations

There are 4 configurations, each of which limits the splits that the load_dataset() function will download.

The default configuration is "all".

  • "dev": only the "dev.clean" split (good for testing the dataset quickly)
  • "clean": only the "clean" splits
  • "other": only the "other" splits
  • "all": all splits

Example

Loading the "clean" configuration with only the train.clean.100 split:

from datasets import load_dataset

dataset = load_dataset("blabble-io/libritts_r", "clean", split="train.clean.100")

Streaming is also supported:

dataset = load_dataset("blabble-io/libritts_r", streaming=True)

Columns

{
    "audio": datasets.Audio(sampling_rate=24_000),
    "text_normalized": datasets.Value("string"),
    "text_original": datasets.Value("string"),
    "speaker_id": datasets.Value("string"),
    "path": datasets.Value("string"),
    "chapter_id": datasets.Value("string"),
    "id": datasets.Value("string"),
}

Example Row

{
  'audio': {
    'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS_R/dev-clean/3081/166546/3081_166546_000028_000002.wav', 
    'array': ..., 
    'sampling_rate': 24000
  }, 
  'text_normalized': 'How quickly he disappeared!"',
  'text_original': 'How quickly he disappeared!"',
  'speaker_id': '3081', 
  'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS_R/dev-clean/3081/166546/3081_166546_000028_000002.wav', 
  'chapter_id': '166546', 
  'id': '3081_166546_000028_000002'
}

Dataset Details


Citation

@ARTICLE{Koizumi2023-hs,
  title         = "{LibriTTS-R}: A restored multi-speaker text-to-speech corpus",
  author        = "Koizumi, Yuma and Zen, Heiga and Karita, Shigeki and Ding,
                   Yifan and Yatabe, Kohei and Morioka, Nobuyuki and Bacchiani,
                   Michiel and Zhang, Yu and Han, Wei and Bapna, Ankur",
  abstract      = "This paper introduces a new speech dataset called
                   ``LibriTTS-R'' designed for text-to-speech (TTS) use. It is
                   derived by applying speech restoration to the LibriTTS
                   corpus, which consists of 585 hours of speech data at 24 kHz
                   sampling rate from 2,456 speakers and the corresponding
                   texts. The constituent samples of LibriTTS-R are identical
                   to those of LibriTTS, with only the sound quality improved.
                   Experimental results show that the LibriTTS-R ground-truth
                   samples showed significantly improved sound quality compared
                   to those in LibriTTS. In addition, neural end-to-end TTS
                   trained with LibriTTS-R achieved speech naturalness on par
                   with that of the ground-truth samples. The corpus is freely
                   available for download from
                   \url{http://www.openslr.org/141/}.",
  month         =  may,
  year          =  2023,
  copyright     = "http://creativecommons.org/licenses/by-nc-nd/4.0/",
  archivePrefix = "arXiv",
  primaryClass  = "eess.AS",
  eprint        = "2305.18802"
}