---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 5933166725.824
    num_examples: 130634
  download_size: 5547933432
  dataset_size: 5933166725.824
tags:
- audio
- text-to-speech
- turkish
- synthetic-voice
language:
- tr
task_categories:
- text-to-speech
---
# Dataset Card for "turkishneuralvoice"
## Dataset Overview
**Dataset Name**: Turkish Neural Voice
**Description**: This dataset contains Turkish audio samples generated using Microsoft Text to Speech services. The dataset includes audio files and their corresponding transcriptions.
## Dataset Structure
**Configs**:
- `default`
**Data Files**:
- Split: `train`
- Path: `data/train-*`
**Dataset Info**:
- Features:
- `audio`: Audio file
- `transcription`: Corresponding text transcription
- Splits:
- `train`
- Number of bytes: `5,933,166,725.824`
- Number of examples: `130,634`
- Download Size: `5,547,933,432` bytes
- Dataset Size: `5,933,166,725.824` bytes
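When decoded by the `datasets` library, each example is a dict whose `audio` field holds the file path, waveform, and sampling rate, alongside the `transcription` string. The sketch below mocks that shape for illustration; the file name, waveform values, sampling rate, and sentence are made up, not taken from the dataset:

```python
# Illustrative shape of one decoded example. All concrete values below are
# hypothetical -- inspect the actual data to confirm the sampling rate.
sample = {
    "audio": {
        "path": "example.wav",          # source file name (made up)
        "array": [0.0, 0.012, -0.007],  # waveform as a float sequence
        "sampling_rate": 16000,         # assumed rate; verify against the data
    },
    "transcription": "Merhaba dünya",   # hypothetical Turkish sentence
}

print(sorted(sample.keys()))  # ['audio', 'transcription']
```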
## Usage
To load this dataset with Hugging Face's `datasets` library, use the following code, replacing the placeholder with the dataset's repository ID on the Hub:
```python
from datasets import load_dataset

# Replace with the actual repository ID of this dataset on the Hugging Face Hub
dataset = load_dataset("path/to/dataset/turkishneuralvoice")

# Access the first training example
sample = dataset["train"][0]
print(sample["transcription"])
```
Since the full download is about 5.5 GB, you can also pass `streaming=True` to `load_dataset` to iterate over examples without downloading the entire dataset first.