---
license: apache-2.0
dataset_info:
  - config_name: commonvoice
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: words
        sequence: string
      - name: word_start
        sequence: float64
      - name: word_end
        sequence: float64
      - name: entity_start
        sequence: int64
      - name: entity_end
        sequence: int64
      - name: entity_label
        sequence: string
    splits:
      - name: train
        num_bytes: 43744079378.659
        num_examples: 948733
      - name: valid
        num_bytes: 722372503.994
        num_examples: 16353
    download_size: 39798988113
    dataset_size: 44466451882.653
  - config_name: gigaspeech
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: words
        sequence: string
      - name: word_start
        sequence: float64
      - name: word_end
        sequence: float64
      - name: entity_start
        sequence: int64
      - name: entity_end
        sequence: int64
      - name: entity_label
        sequence: string
    splits:
      - name: train
        num_bytes: 1032024261294.48
        num_examples: 8282987
      - name: valid
        num_bytes: 1340974408.04
        num_examples: 5715
    download_size: 1148966064515
    dataset_size: 1033365235702.52
  - config_name: libris
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: words
        sequence: string
      - name: word_start
        sequence: float64
      - name: word_end
        sequence: float64
      - name: entity_start
        sequence: int64
      - name: entity_end
        sequence: int64
      - name: entity_label
        sequence: string
    splits:
      - name: train
        num_bytes: 63849575890.896
        num_examples: 281241
      - name: valid
        num_bytes: 793442600.643
        num_examples: 5559
    download_size: 61361142328
    dataset_size: 64643018491.539
  - config_name: mustc
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: words
        sequence: string
      - name: word_start
        sequence: float64
      - name: word_end
        sequence: float64
      - name: entity_start
        sequence: int64
      - name: entity_end
        sequence: int64
      - name: entity_label
        sequence: string
    splits:
      - name: train
        num_bytes: 55552777413.1
        num_examples: 248612
      - name: valid
        num_bytes: 313397447.704
        num_examples: 1408
    download_size: 52028374666
    dataset_size: 55866174860.804
  - config_name: tedlium
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: words
        sequence: string
      - name: word_start
        sequence: float64
      - name: word_end
        sequence: float64
      - name: entity_start
        sequence: int64
      - name: entity_end
        sequence: int64
      - name: entity_label
        sequence: string
    splits:
      - name: train
        num_bytes: 56248950771.568
        num_examples: 268216
      - name: valid
        num_bytes: 321930549.928
        num_examples: 1456
    download_size: 52557126451
    dataset_size: 56570881321.496
  - config_name: voxpopuli
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: words
        sequence: string
      - name: word_start
        sequence: float64
      - name: word_end
        sequence: float64
      - name: entity_start
        sequence: int64
      - name: entity_end
        sequence: int64
      - name: entity_label
        sequence: string
    splits:
      - name: train
        num_bytes: 118516424284.524
        num_examples: 182463
      - name: valid
        num_bytes: 1144543020.808
        num_examples: 1842
    download_size: 98669668241
    dataset_size: 119660967305.332
configs:
  - config_name: commonvoice
    data_files:
      - split: train
        path: commonvoice/train-*
      - split: valid
        path: commonvoice/valid-*
  - config_name: gigaspeech
    data_files:
      - split: train
        path: gigaspeech/train-*
      - split: valid
        path: gigaspeech/valid-*
  - config_name: libris
    data_files:
      - split: train
        path: libris/train-*
      - split: valid
        path: libris/valid-*
  - config_name: mustc
    data_files:
      - split: train
        path: mustc/train-*
      - split: valid
        path: mustc/valid-*
  - config_name: tedlium
    data_files:
      - split: train
        path: tedlium/train-*
      - split: valid
        path: tedlium/valid-*
  - config_name: voxpopuli
    data_files:
      - split: train
        path: voxpopuli/train-*
      - split: valid
        path: voxpopuli/valid-*
language:
  - en
pretty_name: Speech Recognition Alignment Dataset
size_categories:
  - 10M<n<100M
---

# Speech Recognition Alignment Dataset

This dataset is a variation of several widely used ASR datasets, namely LibriSpeech, MuST-C, TED-LIUM, VoxPopuli, Common Voice, and GigaSpeech. The difference is that each example additionally includes (sketched below):

- Precise word-level alignment between the audio and the text.
- Text that has been punctuated and made case-sensitive.
- Named entities identified in the text.
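
Concretely, each record pairs an utterance's audio with its transcript, per-word timings, and entity annotations. The sketch below is purely illustrative: the field names and types follow the schema declared in the metadata above, but all values are invented, the timings are assumed to be in seconds, and `entity_start`/`entity_end` are assumed to index into `words` (none of this is documented here).

```python
# Illustrative record layout (values invented; see caveats above)
{
    "id": "sample-0001",                                   # hypothetical identifier
    "text": "Doctor Smith arrived in Boston on Monday.",
    "audio": {"array": ..., "sampling_rate": 16000},       # 16 kHz waveform
    "words": ["Doctor", "Smith", "arrived", "in", "Boston", "on", "Monday."],
    "word_start": [0.31, 0.62, 0.95, 1.30, 1.41, 1.80, 1.95],  # seconds (assumption)
    "word_end":   [0.60, 0.93, 1.28, 1.40, 1.78, 1.93, 2.40],
    "entity_start": [0, 4],                                 # assumption: word indices
    "entity_end":   [2, 5],
    "entity_label": ["PERSON", "LOCATION"],                 # label set not documented
}
```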

## Usage

First, install the latest version of the 🤗 Datasets package:

```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```

The dataset can be downloaded and pre-processed on disk using the `load_dataset` function:

```python
from datasets import load_dataset

# Available configs: 'libris', 'mustc', 'tedlium', 'voxpopuli', 'commonvoice', 'gigaspeech'
dataset = load_dataset("nguyenvulebinh/asr-alignment", "libris")

# take the first sample of the training set
sample = dataset["train"][0]
```
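
Each sample carries the word-level timings alongside the 16 kHz audio, so you can, for example, cut out the waveform of a single word. This is a minimal sketch; it assumes the `word_start`/`word_end` values are expressed in seconds, which the schema does not state explicitly:

```python
# Cut the audio segment for the first aligned word (sketch).
# Assumption: word_start / word_end are in seconds.
waveform = sample["audio"]["array"]
sr = sample["audio"]["sampling_rate"]          # 16000 according to the schema

start_s, end_s = sample["word_start"][0], sample["word_end"][0]
segment = waveform[int(start_s * sr): int(end_s * sr)]

print(sample["words"][0], f"{end_s - start_s:.2f}s", segment.shape)
```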

The dataset can also be streamed directly from the Hub using 🤗 Datasets' streaming mode. In streaming mode, samples are loaded one at a time as they are requested, rather than the entire dataset being downloaded to disk first:

```python
from datasets import load_dataset

dataset = load_dataset("nguyenvulebinh/asr-alignment", "libris", streaming=True)

# take the first sample of the training set
sample = next(iter(dataset["train"]))
```
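
Because the streamed splits are `IterableDataset` objects, you can also iterate over just a few examples without materializing the whole split; a minimal sketch:

```python
# Peek at a handful of streamed examples (sketch).
for example in dataset["train"].take(3):
    print(example["id"], "-", example["text"][:60])
```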

## Citation

If you use this data, please consider citing the ICASSP 2024 paper *Synthetic Conversations Improve Multi-Talker ASR*:

```bibtex
@INPROCEEDINGS{synthetic-multi-asr-nguyen,
  author={Nguyen, Thai-Binh and Waibel, Alexander},
  booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={SYNTHETIC CONVERSATIONS IMPROVE MULTI-TALKER ASR},
  year={2024},
  volume={},
  number={},
}
```

## License

Each configuration of this dataset is licensed in accordance with the terms of its original source dataset.