
This is a subset of the Multilingual Spoken Word Corpus, built specifically for the few-shot class-incremental learning (FSCIL) task.

A total of 15 languages are chosen, split into 5 base languages (English, German, Catalan, French, Kinyarwanda) and 10 incrementally learned languages (Persian, Spanish, Russian, Welsh, Italian, Basque, Polish, Esperanto, Portuguese, Dutch). In the FSCIL task, a model is first trained on words from the 5 base languages using abundant training data; in each subsequent incremental session, it must learn new words from one incremental language with only a few training examples each, while retaining knowledge of all previously learned words.
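
To make the protocol concrete, here is a minimal sketch of the language split and session schedule described above. The names and structure are illustrative assumptions, not part of the NeuroBench API or the dataset's official tooling.

```python
# Hypothetical sketch of the FSCIL session schedule (illustrative only).

BASE_LANGUAGES = ["English", "German", "Catalan", "French", "Kinyarwanda"]
INCREMENTAL_LANGUAGES = [
    "Persian", "Spanish", "Russian", "Welsh", "Italian",
    "Basque", "Polish", "Esperanto", "Portuguese", "Dutch",
]

def session_schedule():
    """Yield (session_index, languages, mode) for a full FSCIL run."""
    # Session 0: standard training on all base-language words.
    yield 0, BASE_LANGUAGES, "base"
    # Sessions 1..10: few-shot learning, one incremental language each.
    for i, lang in enumerate(INCREMENTAL_LANGUAGES, start=1):
        yield i, [lang], "incremental"
```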

Each of the 5 base languages consists of 20 classes, with 500/100/100 samples per class for the train/val/test splits. Each of the 10 incremental languages consists of 10 classes, each with 200 available samples. From these, a small number (e.g., 5) are chosen for few-shot training, and another 100 samples are held out for testing. The model therefore starts with a knowledge base of 100 words from the base classes, which grows to 200 words by the end of all incremental sessions.
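
The counts above imply a simple per-session split of each new class's 200 samples. The sketch below shows one way such a few-shot session might be sampled; `samples_by_class` is a hypothetical mapping from class label to the 200 audio samples, introduced here only for illustration.

```python
import random

CLASSES_PER_BASE_LANG = 20   # 5 base languages x 20 classes = 100 base words
CLASSES_PER_INCR_LANG = 10   # 10 incremental languages x 10 classes = 100 more
SHOTS = 5                    # few-shot training examples per new class
TEST_PER_CLASS = 100         # held-out test samples per new class

def sample_incremental_session(samples_by_class, seed=0):
    """Split each new class's 200 samples into few-shot train and test sets.

    `samples_by_class` maps class label -> list of 200 samples
    (hypothetical structure, for illustration only).
    """
    rng = random.Random(seed)
    train, test = {}, {}
    for label, samples in samples_by_class.items():
        picked = rng.sample(samples, SHOTS + TEST_PER_CLASS)
        train[label] = picked[:SHOTS]
        test[label] = picked[SHOTS:]
    return train, test
```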

By default, the NeuroBench harness installs the 48 kHz Opus-formatted data. Audio files converted to 16 kHz WAV are also available for download from this repository.
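
If you work from the 48 kHz Opus files instead, a resampling step such as the following produces 16 kHz WAV. This is a sketch assuming torchaudio with a backend that can decode Opus (e.g., FFmpeg); the file paths are placeholders.

```python
import torchaudio
import torchaudio.functional as F

# Paths are placeholders; point them at your local copy of the dataset.
waveform, sr = torchaudio.load("word.opus")                       # 48 kHz Opus input
waveform_16k = F.resample(waveform, orig_freq=sr, new_freq=16_000)  # downsample
torchaudio.save("word.wav", waveform_16k, sample_rate=16_000)     # 16 kHz WAV output
```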
