This dataset contains precomputed audio features designed for use with the openWakeWord library. Specifically, the features are intended to serve as general-purpose negative data (that is, data that does not contain the target wake word or phrase) for training custom openWakeWord models.

The individual .npy files in this dataset are not original audio data, but rather low-dimensional audio features produced by a pre-trained speech embedding model from Google. openWakeWord uses these features as inputs to custom word/phrase detection models.
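
For illustration, the feature files can be inspected with NumPy; the filename below is a placeholder rather than an actual file in this dataset:

```python
import numpy as np

# Placeholder filename; substitute one of the .npy feature files from this dataset.
features = np.load("openwakeword_negative_features.npy")

# Each row is one window of precomputed features:
# 16 temporal steps x 96 embedding dimensions per step.
print(features.shape)   # (n_windows, 16, 96)
print(features.dtype)
```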

The dataset currently contains precomputed features from the following datasets.

ACAV100M

The ACAV100M dataset contains a highly diverse collection of audio, including multilingual speech, noise, and music, all captured in real-world environments. This diversity makes it a highly effective source of negative data for training custom openWakeWord models.

Dataset source: https://acav100m.github.io/

Size: An array of shape (5625000, 16, 96), corresponding to ~2000 hours of audio from the ACAV100M dataset. Each row has a temporal dimension of 16; at 80 ms per temporal step, one row therefore represents 1.28 seconds of audio.
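
As a rough sketch (the filename is a placeholder), the array can be memory-mapped rather than read fully into RAM, and the stated duration follows directly from its shape:

```python
import numpy as np

# Placeholder filename; substitute the ACAV100M feature file from this dataset.
# mmap_mode="r" memory-maps the array instead of loading all ~5.6M rows into RAM.
features = np.load("acav100m_features.npy", mmap_mode="r")
n_rows, n_steps, n_dims = features.shape        # expected: (5625000, 16, 96)

# Each temporal step covers 80 ms, so one row spans 16 * 0.080 = 1.28 s of audio.
seconds_per_row = n_steps * 0.080
total_hours = n_rows * seconds_per_row / 3600
print(f"~{total_hours:.0f} hours")              # 5,625,000 * 1.28 s / 3600 s = 2000 hours
```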

False-Positive Validation Set

This is a hand-selected combination of audio features (representing ~11 hours of total audio) that serves as a false-positive validation set when training custom openWakeWord models. It is intended to be broadly representative of the different types of environments where openWakeWord models could be deployed, and thus useful for estimating false-positive rates.
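
As a rough sketch of how such a set can be used (filenames are placeholders, the scoring function stands in for whatever custom model is being validated, and windows are assumed to be non-overlapping):

```python
import numpy as np

def score_windows(features: np.ndarray) -> np.ndarray:
    """Stand-in for a trained wake word/phrase classifier; replace with your own model.
    It should map a batch of (16, 96) feature windows to one activation score per window."""
    return np.zeros(features.shape[0])  # placeholder: never activates

# Placeholder filename for the validation features in this dataset.
val_features = np.load("validation_set_features.npy")      # shape (n_windows, 16, 96)

# Each window covers 16 steps * 80 ms = 1.28 s, so ~11 hours is roughly 31k windows.
total_hours = val_features.shape[0] * 16 * 0.080 / 3600

scores = score_windows(val_features)                        # one score per window
false_activations = int(np.sum(scores > 0.5))               # example threshold

print(f"{false_activations / total_hours:.2f} false activations per hour")
```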

The contributing audio datasets are:

  1. The entire DiPCo dataset (~5.3 hours)
  2. Selected clips from the Santa Barbara Corpus of Spoken American English (~3.7 hours)
  3. Selected clips from the MUSDB Music Dataset (2 hours)

Note that the MUSDB audio data was first reverberated with the MIT impulse response recordings to make it more representative of real-world deployments.
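
For context, reverberating a clip with a recorded impulse response is essentially a convolution. The following is a minimal sketch with placeholder filenames, assuming both files share the same sample rate; it is not the exact pipeline used to build this dataset:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder filenames; neither the MUSDB clips nor the MIT impulse responses are included here.
sr, clip = wavfile.read("musdb_clip.wav")
_, ir = wavfile.read("impulse_response.wav")

# Convolve in float mono.
clip = clip.astype(np.float32)
ir = ir.astype(np.float32)
if clip.ndim > 1:
    clip = clip.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

# Convolving the dry clip with a room impulse response simulates that room's reverberation.
wet = fftconvolve(clip, ir, mode="full")[: len(clip)]

# Rescale so the reverberated clip peaks at the same level as the original.
wet *= np.max(np.abs(clip)) / (np.max(np.abs(wet)) + 1e-9)

wavfile.write("musdb_clip_reverberated.wav", sr, wet.astype(np.int16))
```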
