---
language:
  - en
size_categories:
  - 10K<n<100K
task_categories:
  - text2text-generation
  - text-to-audio
  - translation
pretty_name: BabblePhon
dataset_info:
  features:
    - name: original
      dtype: string
    - name: phonemes
      dtype: string
  splits:
    - name: train
      num_bytes: 2838381
      num_examples: 12406
  download_size: 1835746
  dataset_size: 2838381
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
tags:
  - Phonemes
  - Text2Text
  - Text2Phonemes
---

# BabblePhon

## Introduction

Welcome to the BabblePhon dataset! This dataset consists of 12,406 text-phoneme pairs. Its primary objective is to serve as a resource for training a Text2Text model that translates text into context-aware phoneme transcriptions.

## Description

The dataset contains synthetic text-phoneme pairs generated for the purpose of training machine learning models. Each entry in the dataset consists of a piece of text paired with its corresponding phoneme transcription.
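
For illustration, each row is simply two strings under the `original` and `phonemes` features. A minimal sketch of one row in Python, reusing the example pair from the generation prompt shown below:

```python
# Illustrative row; the text/IPA pair is taken from the example in the generation prompt below.
row = {
    "original": "yeah, thats if i graduate",
    "phonemes": "jə, ðæts ɪf aɪ ˈɡræʤuˌeɪt",
}

print(row["original"])   # raw text
print(row["phonemes"])   # context-aware IPA transcription
```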

We used the following prompt when generating the synthetic data:

```
Your purpose is to transcribe text provided into IPA, you will respond with only
the IPA transcription and nothing else, add quotes if present, keep punctuation.

for example, you should return it like this (not following this format will break scripts, so follow this):
Original: yeah, thats if i graduate
IPA: jə, ðæts ɪf aɪ ˈɡræʤuˌeɪt

transcribe: {to_transcribe}
```
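
Since the prompt ends with a `{to_transcribe}` placeholder, filling it per sentence is plain string formatting. A minimal sketch, assuming the prompt text above has been saved locally as `prompt.txt` (a hypothetical path) and without assuming any particular model API:

```python
# Build the generation prompt for one input sentence.
# "prompt.txt" is a hypothetical local copy of the prompt shown above.
with open("prompt.txt", encoding="utf-8") as f:
    prompt_template = f.read()

prompt = prompt_template.format(to_transcribe="the quick brown fox")
print(prompt)  # send this to whichever model is used for transcription
```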

## Data Quality

It's important to note that this dataset has not undergone manual cleaning. As a result, it may contain errors and inaccuracies. Users should exercise caution when utilizing the dataset for training or evaluation purposes.

## Usage

Researchers and developers can leverage this dataset for various natural language processing tasks, particularly those involving phoneme transcriptions. However, it's recommended to perform additional preprocessing and validation to address potential data inconsistencies.
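
A minimal loading-and-filtering sketch with the Hugging Face `datasets` library, assuming the repository id is `Korakoe/BabblePhon` (check the actual id on the Hub); the filter is only a starting point, not a full cleaning pipeline:

```python
from datasets import load_dataset

# Assumed repository id; adjust the namespace if needed.
ds = load_dataset("Korakoe/BabblePhon", split="train")  # columns: "original", "phonemes"

# Basic sanity check: drop rows where either field is empty.
# The card notes the data has not been manually cleaned, so extend this as needed.
ds = ds.filter(
    lambda row: len(row["original"].strip()) > 0 and len(row["phonemes"].strip()) > 0
)

print(ds)      # dataset size after filtering
print(ds[0])   # first text-phoneme pair
```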

## Acknowledgments

```bibtex
@article{zen2019libritts,
  title={LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech},
  author={Zen, Heiga and Dang, Viet and Clark, Rob and Zhang, Yu and Shen, Jonathan and Jia, Ye and Chen, Zhifeng and Wu, Yonghui},
  journal={arXiv preprint arXiv:1904.02882},
  year={2019}
}

@article{yamagishi2019full,
  title={A full-bandwidth open-source vocoder for high-quality speech synthesis},
  author={Yamagishi, Junichi and Veaux, Christophe and MacDonald, Kirsten and King, Simon},
  year={2019},
  publisher={The Centre for Speech Technology Research (CSTR)}
}
```