---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: sentence
      dtype: string
  splits:
    - name: train
      num_bytes: 12264199958.656
      num_examples: 49504
  download_size: 11879936920
  dataset_size: 12264199958.656
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
task_categories:
  - automatic-speech-recognition
  - translation
  - text-to-speech
language:
  - ja
size_categories:
  - 10K<n<100K
---

Common Voice, Google FLEURS, JSUT v1.1, and JAS_v2 (joujiboi/japanese-anime-speech-v2), processed for Whisper. Not shuffled or normalized. Roughly 50% anime speech and 50% other sources; the non-anime corpora are included in full.
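
A minimal loading sketch (the repo id `sin2piusc/jgca_v2_50k_2` is taken from this page; audio decodes at 16 kHz per the metadata above):

```python
from datasets import load_dataset

ds = load_dataset("sin2piusc/jgca_v2_50k_2", split="train")

sample = ds[0]
print(sample["sentence"])                # transcript
print(sample["audio"]["sampling_rate"])  # 16000
```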

If text normalization is needed:

```python
import re
import MeCab
import neologdn
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

normalizer = BasicTextNormalizer()  # Whisper's basic text normalizer
wakati = MeCab.Tagger("-Owakati")   # wakati-gaki: space-separated tokenization
special_characters = r'[,、。.「」…?・!\-;:"“%‘”�]'  # hyphen escaped so it is not read as a range

def norm_everything(batch):
    batch["sentence"] = neologdn.normalize(batch["sentence"]).strip()  # unify character width/variants
    batch["sentence"] = normalizer(batch["sentence"]).strip()
    batch["sentence"] = wakati.parse(batch["sentence"]).strip()
    batch["sentence"] = re.sub(special_characters, "", batch["sentence"]).strip()
    return batch

ds = ds.map(norm_everything)
```
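
Since the set is targeted at Whisper, a common follow-up is mapping audio and transcripts into model inputs. This is a sketch only; the `openai/whisper-small` checkpoint and the `input_features`/`labels` column names are assumptions, not part of this dataset:

```python
from transformers import WhisperProcessor

# Checkpoint choice is an assumption; any Whisper size works the same way.
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="ja", task="transcribe"
)

def prepare(batch):
    audio = batch["audio"]
    # Log-mel features from the 16 kHz waveform.
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Tokenized transcript as training labels.
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)
```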