  - split: generated
    path: data/generated-*
---

# Hy-Generated Audio Data with CV20.0

This dataset provides Armenian speech data consisting of both real and generated audio clips.

- The `train`, `test`, and `eval` splits are derived from the [Common Voice 20.0](https://commonvoice.mozilla.org/en/datasets) Armenian dataset.
- The `generated` split contains 100,000 high-quality clips synthesized with a fine-tuned [F5-TTS](https://github.com/f5-lab/tts) model, distributed evenly across 404 synthetic voices.

---

## 📊 Dataset Statistics

| Split       | # Clips | Duration (hours) |
|-------------|---------|------------------|
| `train`     | 9,300   | 13.53            |
| `test`      | 5,818   | 9.16             |
| `eval`      | 5,856   | 8.76             |
| `generated` | 100,000 | 113.61           |

**Total duration:** ~**145 hours**
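The total is simply the sum of the per-split durations:

```python
# Per-split durations in hours, taken from the statistics table.
durations = {"train": 13.53, "test": 9.16, "eval": 8.76, "generated": 113.61}

total_hours = sum(durations.values())
print(f"{total_hours:.2f}")  # 145.06, i.e. roughly 145 hours
```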

## 🛠️ Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("ErikMkrtchyan/Hy-Generated-audio-data-with-cv20.0")
```