
This is a processed version of the LibriLight dataset, ready for training WhisperSpeech models.

See https://github.com/collabora/WhisperSpeech for more details.

Quick start

If you want to quickly train a basic WhisperSpeech model, you can start by downloading the small subset:

# the --include globs download only the small and validation data splits and the accompanying config files
huggingface-cli download --repo-type dataset --include '*-small-*' '*small.dataset' '*-speakers*' --local-dir . -- collabora/whisperspeech-librilight
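If you prefer to do this from Python, the same filtered download can be sketched with `snapshot_download` from the `huggingface_hub` library. The `allow_patterns` below mirror the `--include` globs of the CLI command above; note that running the guarded call will actually download the files:

```python
from huggingface_hub import snapshot_download

# Glob patterns mirroring the --include arguments of the CLI command above:
# the small training shards, the small dataset descriptor and the
# validation speaker splits.
ALLOW_PATTERNS = ["*-small-*", "*small.dataset", "*-speakers*"]

if __name__ == "__main__":
    # Download only the matching files from the dataset repo
    # into the current directory.
    snapshot_download(
        repo_id="collabora/whisperspeech-librilight",
        repo_type="dataset",
        allow_patterns=ALLOW_PATTERNS,
        local_dir=".",
    )
```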

# download the semantic token model to extract the token embeddings from it
huggingface-cli download collabora/whisperspeech whisper-vq-stoks-medium-en+pl.model

# the T2S training invocation:
python3 -m whisperspeech.train_multi \
  --task "t2s_up_wds_mlang_enclm base --frozen_embeddings_model whisper-vq-stoks-medium-en+pl.model" \
  --batch-size 32 --accumulate-grad-batches 2 \
  --epochs 2 --lr-schedule wsd \
  --tunables="--cps_input --causal_encoder --warmup_steps=300 --encoder_depth_ratio=.25" \
  --dataset-config=--vq_codes=513 \
  --training-data @librilight-t2s-train-small.dataset \
  --validation-data @librilight-t2s-val-common-speakers.dataset \
  --validation-data @librilight-t2s-val-unseen-speakers.dataset \
  --monitored-metric 'val_loss/dataloader_idx_0'

# the S2A training invocation:
python3 -m whisperspeech.train_multi \
  --task "s2a_delar_mup_wds_mlang tiny --quantizers 4 --spk_width=192 --frozen_embeddings_model whisper-vq-stoks-medium-en+pl.model" \
  --batch-size 48 \
  --epochs 4 --lr-schedule wsd \
  --tunables="--rope --warmup_steps=300" \
  --dataset-config=--vq_codes=513 \
  --training-data @librilight-s2a-train-small.dataset \
  --validation-data @librilight-s2a-val-common-speakers.dataset \
  --validation-data @librilight-s2a-val-unseen-speakers.dataset \
  --monitored-metric 'val_loss/dataloader_idx_0'

The --accumulate-grad-batches option is set to achieve a good effective batch size on a single 4090 GPU. If you have multiple GPUs, it will probably make sense to lower the per-GPU batch size. For example, 16 GPUs with a batch size of 16 seem to give good performance and fast training.
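The effective batch size is simply the product of the per-GPU batch size, the gradient-accumulation factor, and the number of GPUs. A quick sketch of the arithmetic (the helper function is ours, not part of WhisperSpeech):

```python
def effective_batch_size(batch_size: int,
                         accumulate_grad_batches: int = 1,
                         num_gpus: int = 1) -> int:
    """Number of samples contributing to each optimizer step."""
    return batch_size * accumulate_grad_batches * num_gpus

# Single 4090, as in the T2S invocation above: 32 * 2 * 1
print(effective_batch_size(32, accumulate_grad_batches=2))  # 64

# 16 GPUs with a per-GPU batch size of 16 and no accumulation
print(effective_batch_size(16, num_gpus=16))                # 256
```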

Because we use Maximal Update Parametrization, higher effective batch sizes always result in lower losses and you don't need to adjust the learning rate. Unfortunately, the effect is not linear, so there is an optimal batch size and little benefit in increasing it beyond that.
