---
license: cc-by-4.0
task_categories:
  - automatic-speech-recognition
  - text-to-speech
language:
  - vi
pretty_name: a novel large-scale Vietnamese speech corpus (LSVSC)
size_categories:
  - 10K<n<100K
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: transcription
      dtype: string
    - name: topic
      dtype: string
    - name: gender
      dtype: string
    - name: dialect
      dtype: string
    - name: emotion
      dtype: string
    - name: age
      dtype: string
  splits:
    - name: train
      num_bytes: 8620435812.644
      num_examples: 45458
    - name: validation
      num_bytes: 1102706521.852
      num_examples: 5682
    - name: test
      num_bytes: 1136311929.744
      num_examples: 5683
  download_size: 11575801683
  dataset_size: 10859454264.239998
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

unofficial mirror of the LSVSC dataset (a novel large-scale Vietnamese speech corpus)

official announcement: https://www.mdpi.com/2079-9292/13/5/977

official download: https://drive.google.com/drive/folders/1tiPKaIOC7bt6isv5qFqf61O_2jFK8ZOI

~100 hours of audio, ~57k samples

pre-processing applied: removed stray control characters (`\r`, `\n`) from the transcriptions
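
a minimal sketch of that kind of cleanup with `datasets.map`, in case you want to re-run or extend it yourself (the `transcription` column name comes from the features listed above):

```python
from datasets import load_dataset

ds = load_dataset("doof-ferb/LSVSC", split="train")

def clean_transcription(example):
    # replace stray carriage returns / newlines with spaces and collapse repeated whitespace
    text = example["transcription"].replace("\r", " ").replace("\n", " ")
    example["transcription"] = " ".join(text.split())
    return example

ds = ds.map(clean_transcription)
```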

still to do: check for misspellings, restore foreign words that were phonetised into Vietnamese spelling
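
a possible starting point for that review, not an official tool: flag every transcription containing a token outside a list of valid Vietnamese syllables; `vi_syllables.txt` below is a hypothetical word list you would have to supply yourself:

```python
from datasets import load_dataset

ds = load_dataset("doof-ferb/LSVSC", split="train")

# hypothetical word list: valid Vietnamese syllables, one per line
with open("vi_syllables.txt", encoding="utf-8") as f:
    valid_syllables = {line.strip().lower() for line in f}

def flag_for_review(example):
    # tokens outside the syllable list are likely misspellings or phonetised foreign words
    tokens = [t.strip(".,?!") for t in example["transcription"].lower().split()]
    example["oov_tokens"] = [t for t in tokens if t and t not in valid_syllables]
    return example

suspicious = ds.map(flag_for_review).filter(lambda ex: len(ex["oov_tokens"]) > 0)
print(suspicious.num_rows, "transcriptions flagged for manual review")
```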

usage with the HuggingFace `datasets` library:

```python
# pip install -q "datasets[audio]" torch
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("doof-ferb/LSVSC", split="train", streaming=True)
dataset = dataset.with_format("torch")  # streaming (iterable) datasets use .with_format() instead of .set_format()

# audio clips have different lengths, so keep each batch as a plain list of examples instead of stacking tensors
dataloader = DataLoader(dataset, batch_size=4, collate_fn=list)
```
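
continuing the snippet above, a quick sanity check on the first batch (each batch is a plain list because of the `collate_fn`; this assumes a `datasets` version where the audio column decodes to a dict with `array` / `sampling_rate`):

```python
batch = next(iter(dataloader))  # list of 4 examples
sample = batch[0]
print(sample["transcription"])
print(sample["audio"]["array"].shape, sample["audio"]["sampling_rate"])
```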