---
license: cc-by-sa-4.0
dataset_info:
  features:
    - name: id
      dtype: string
    - name: path
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: transcription
      dtype: string
    - name: duration
      dtype: float32
    - name: language
      dtype: string
    - name: original_speaker_id
      dtype: int64
    - name: session_id
      dtype: int64
    - name: topic
      dtype: string
    - name: phonetic_trans
      dtype: string
  splits:
    - name: train
      num_bytes: 1015539069.14
      num_examples: 9869
    - name: test
      num_bytes: 106265310.135
      num_examples: 1315
    - name: validation
      num_bytes: 106871907.43
      num_examples: 1130
  download_size: 1224216556
  dataset_size: 1228676286.7050002
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
task_categories:
  - automatic-speech-recognition
language:
  - en
  - zh
tags:
  - phonetic transcription
size_categories:
  - 10K<n<100K
---

# Dataset Summary

This dataset is a modified version of the ASCEND dataset, a corpus of spontaneous Mandarin-English code-switched speech published by Lovenia et al. (2022) (check here for the dataset and here for the paper). This version adds a phonetic transcription column, generated with the eSpeak backend of the phonemizer library by Bernard et al. (2021) (check it out here).

*The following documentation is a modified version of the ASCEND documentation.*

# Usage

To load the full dataset, run:

```python
from datasets import load_dataset

ds = load_dataset("katyayego/ASCEND-phoneme")
```

# Dataset Structure

A typical data point comprises the path to the audio file, the decoded audio array, and its orthographic and phonetic transcriptions. Additional fields include the datapoint id, duration, language, original speaker id, session id, and topic.

```python
{
  'id': '00644',
  'path': 'example_path.wav',
  'audio': {
    'path': 'example_audio.wav',
    'array': array([-0.00320435, -0.00296021, -0.00039673, ...,  0.00198364,
        0.00158691, -0.00061035]),
    'sampling_rate': 16000
  },
  'transcription': '我们应该去帮助他们说',
  'duration': 2.119999885559082,
  'language': 'zh',
  'original_speaker_id': 9,
  'session_id': 3,
  'topic': 'technology',
  'phonetic_trans': 'w o2 m ə1 n j i5 ŋ k ai5 tɕh y5 p a5 ŋ ts. u5 th ɑ5 m ə1 n s. w o5'
}
```
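The `phonetic_trans` field stores phones as a single space-separated string (with tone digits attached to Mandarin vowels, as produced by eSpeak). A minimal sketch of tokenizing it with the standard library — the helper name here is ours, not part of the dataset:

```python
def split_phones(phonetic_trans: str) -> list[str]:
    """Split a space-separated phonetic transcription into phone tokens.

    Assumes phones are delimited by single spaces, as in the example
    datapoint above; tone digits stay attached to their vowel symbol.
    """
    return phonetic_trans.split()

phones = split_phones(
    'w o2 m ə1 n j i5 ŋ k ai5 tɕh y5 p a5 ŋ ts. u5 th ɑ5 m ə1 n s. w o5'
)
print(len(phones))    # number of phone tokens in this utterance
print(phones[:4])     # first few tokens, e.g. onset and toned vowel
```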

# Data Splits

Number of utterances: 9,869 train, 1,130 validation, and 1,315 test.
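Those counts correspond to roughly an 80/9/11 train/validation/test split, which can be checked with a few lines of plain Python:

```python
# Utterance counts from the dataset card; fractions are approximate.
splits = {"train": 9869, "validation": 1130, "test": 1315}
total = sum(splits.values())
for name, n in splits.items():
    print(f"{name}: {n} utterances ({n / total:.1%})")
```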

# Licensing Information

Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)