---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- vi
pretty_name: VAIS-1000
size_categories:
- n<1K
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 187348211
    num_examples: 1000
  download_size: 169120503
  dataset_size: 187348211
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# unofficial mirror of VAIS-1000

official announcement: https://vais.vn/vi/tai-ve/hts_for_vietnamese (dead link)

mirror: https://github.com/undertheseanlp/text_to_speech/tree/run/data/vais1000/raw

small: only 1h40min of audio - 1 speaker (female, northern accent) - 1k samples

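The duration figure can be sanity-checked from the decoded audio: each example's length in seconds is `len(array) / sampling_rate` (field names as in the `datasets` Audio feature). A minimal sketch on dummy stand-in data, not the real dataset:

```python
def total_duration_seconds(examples) -> float:
    # sum per-example durations; each decoded audio example is a dict
    # holding the sample array and its sampling rate
    return sum(len(e["array"]) / e["sampling_rate"] for e in examples)

# dummy stand-ins for decoded examples: two clips of 1 s at 16 kHz
fake = [{"array": [0.0] * 16000, "sampling_rate": 16000}] * 2
print(total_duration_seconds(fake))  # 2.0
```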
pre-processing: none

to do: check for misspellings, restore foreign words that were phonetised into Vietnamese

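One way to surface candidates for both cleanup tasks is to flag words containing letters outside the Vietnamese alphabet (f, j, w, z, etc.), which usually indicate a phonetised foreign word or a typo. A rough sketch; the helper and letter set are illustrative, not part of the dataset:

```python
# lowercase Vietnamese alphabet, including all diacritic vowel forms;
# f, j, w, z are deliberately absent
VIETNAMESE_LETTERS = set(
    "aàáảãạăằắẳẵặâầấẩẫậbcdđeèéẻẽẹêềếểễệghiìíỉĩịklmn"
    "oòóỏõọôồốổỗộơờớởỡợpqrstuùúủũụưừứửữựvxyỳýỷỹỵ"
)

def suspicious_words(transcription: str) -> list[str]:
    # flag any word with an alphabetic character outside the Vietnamese alphabet
    words = transcription.lower().split()
    return [
        w for w in words
        if any(c.isalpha() and c not in VIETNAMESE_LETTERS for c in w)
    ]

print(suspicious_words("xin chào world wifi"))  # ['world', 'wifi']
```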
usage with HuggingFace `datasets`:

```python
# pip install -q "datasets[audio]"
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("doof-ferb/vais1000", split="train")
dataset.set_format(type="torch", columns=["audio", "transcription"])
# note: raw waveforms have varying lengths, so batching may need
# a custom collate_fn that pads each batch to equal length
dataloader = DataLoader(dataset, batch_size=4)
```
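Since clips differ in length, PyTorch's default collate cannot stack raw waveforms into one tensor; padding each batch to its longest clip is the usual fix (in practice via `torch.nn.utils.rnn.pad_sequence` or a feature extractor's padding). The idea, sketched in plain Python:

```python
def pad_batch(waveforms, pad_value=0.0):
    # zero-pad every waveform to the length of the longest one,
    # so the batch becomes rectangular and stackable into a tensor
    max_len = max(len(w) for w in waveforms)
    return [list(w) + [pad_value] * (max_len - len(w)) for w in waveforms]

print(pad_batch([[0.1, 0.2, 0.3], [0.4]]))  # [[0.1, 0.2, 0.3], [0.4, 0.0, 0.0]]
```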