# Meta Speech Recognition English Dataset (Set 2)

This dataset contains both metadata and audio files for English speech recognition samples.

## Dataset Statistics

### Splits and Sample Counts

- **train**: 42961 samples
- **valid**: 2387 samples
- **test**: 2387 samples
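The three splits form an approximately 90/5/5 partition, which can be checked directly from the counts above:

```python
# Split sizes as listed above
splits = {"train": 42961, "valid": 2387, "test": 2387}

total = sum(splits.values())
print(total)  # 47735

# Fraction of the data in each split (~90/5/5)
fractions = {name: round(n / total, 3) for name, n in splits.items()}
print(fractions)  # {'train': 0.9, 'valid': 0.05, 'test': 0.05}
```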
## Example Samples

### train

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/AzSutepklXI_2.wav",
  "text": "To Jesus, so God is faithful, because when he keeps, you know, when, when when you ask him to do something, he keeps his What. He kept, his cobonut With Jacob, you know through the years, he. AGE_18_30 GER_FEMALE EMOTION_ANG INTENT_INFORM",
  "duration": 14.79
}
```

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/ZwNpsW26jp4_13.wav",
  "text": "Oppressortein would like to cover a host of things, but the first thing we'd like to find out is how is the Israeli economy doing today. AGE_30_45 GER_MALE EMOTION_NEU INTENT_QUESTION",
  "duration": 6.39
}
```

### valid

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/76yqm7rlKnE_4.wav",
  "text": "And disowns throughout its entirety. AGE_18_30 GER_MALE EMOTION_NEU INTENT_INFORM",
  "duration": 3.0
}
```

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/I-9GBnlAl_U_5.wav",
  "text": "You students who continue to edify me daily in my life. AGE_30_45 GER_FEMALE EMOTION_ANG INTENT_INFORM",
  "duration": 3.34
}
```

### test

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/LNtQPSUi1iQ_10.wav",
  "text": "Know some details about the bild you can drill into individual. AGE_30_45 GER_MALE EMOTION_ANG INTENT_QUESTION",
  "duration": 3.06
}
```

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/9VU0GVCW0G4_10.wav",
  "text": "Just be shuffling papers, so we vet each Parker and make a partner and make sure that they are going to provide students with. AGE_30_45 GER_MALE EMOTION_NEU INTENT_INFORM",
  "duration": 6.87
}
```
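Each transcript embeds metadata tags (`AGE_*`, `GER_*`, `EMOTION_*`, `INTENT_*`) alongside the spoken text. A minimal sketch for separating the two, assuming tags always use these four prefixes:

```python
# Prefixes of the metadata tags seen in the samples above
TAG_PREFIXES = ("AGE_", "GER_", "EMOTION_", "INTENT_")

def split_transcript(text: str):
    """Separate spoken words from metadata tags in a transcript."""
    words = text.split()
    spoken = [w for w in words if not w.startswith(TAG_PREFIXES)]
    tags = [w for w in words if w.startswith(TAG_PREFIXES)]
    return " ".join(spoken), tags

text = ("And disowns throughout its entirety. "
        "AGE_18_30 GER_MALE EMOTION_NEU INTENT_INFORM")
spoken, tags = split_transcript(text)
print(spoken)  # And disowns throughout its entirety.
print(tags)    # ['AGE_18_30', 'GER_MALE', 'EMOTION_NEU', 'INTENT_INFORM']
```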
## Training NeMo Conformer ASR

### 1. Pull and Run NeMo Docker

```bash
# Pull the NeMo Docker image
docker pull nvcr.io/nvidia/nemo:24.05

# Run the container with GPU support
docker run --gpus all -it --rm \
  -v /external1:/external1 \
  -v /external2:/external2 \
  -v /external3:/external3 \
  --shm-size=8g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  nvcr.io/nvidia/nemo:24.05
```
### 2. Create Training Script

Create a script `train_nemo_asr.py`. Training a CTC model from scratch also requires preprocessor, encoder, and decoder sections in the config, so this version fine-tunes a pretrained checkpoint and overrides only the data loaders and optimizer:

```python
import pytorch_lightning as pl
from omegaconf import OmegaConf
from nemo.collections.asr.models import ASRModel

# The same data is hosted on Hugging Face and can be inspected with:
#   from datasets import load_dataset
#   dataset = load_dataset("WhissleAI/Meta_STT_EN_Set2")

# Data/optimizer/trainer configuration. The manifests are NeMo-style
# JSON-lines files with "audio_filepath", "text", and "duration" fields.
config = OmegaConf.create({
    'model': {
        'train_ds': {
            'manifest_filepath': 'train.json',
            'batch_size': 32,
            'shuffle': True,
            'num_workers': 4,
            'pin_memory': True,
            'use_start_end_token': False,
        },
        'validation_ds': {
            'manifest_filepath': 'valid.json',
            'batch_size': 32,
            'shuffle': False,
            'num_workers': 4,
            'pin_memory': True,
            'use_start_end_token': False,
        },
        'optim': {
            'name': 'adamw',
            'lr': 0.001,
            'weight_decay': 0.01,
        },
    },
    'trainer': {
        'devices': 1,
        'accelerator': 'gpu',
        'max_epochs': 100,
        'precision': 16,
    },
})

# Create the trainer first so it can be attached to the model
trainer = pl.Trainer(**config.trainer)

# Start from a pretrained Conformer CTC checkpoint
model = ASRModel.from_pretrained("stt_en_conformer_ctc_large")
model.set_trainer(trainer)
model.setup_training_data(config.model.train_ds)
model.setup_validation_data(config.model.validation_ds)
model.setup_optimization(config.model.optim)

# Train
trainer.fit(model)
```
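The script above expects NeMo-style manifests (`train.json`, `valid.json`): one JSON object per line with `audio_filepath`, `text`, and `duration`, matching the example samples shown earlier. A minimal sketch for writing one from in-memory records:

```python
import json

def write_manifest(records, path):
    """Write records as a NeMo JSON-lines manifest (one object per line)."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps({
                "audio_filepath": rec["audio_filepath"],
                "text": rec["text"],
                "duration": rec["duration"],
            }) + "\n")

# Example record using the field layout from the train split
records = [{
    "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/AzSutepklXI_2.wav",
    "text": "To Jesus, so God is faithful AGE_18_30 GER_FEMALE EMOTION_ANG INTENT_INFORM",
    "duration": 14.79,
}]
write_manifest(records, "train.json")
```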
### 3. Create Config File

Alternatively, keep the configuration in a standalone `config.yaml`:

```yaml
model:
  train_ds:
    manifest_filepath: "train.json"
    batch_size: 32
    shuffle: true
    num_workers: 4
    pin_memory: true
    use_start_end_token: false

  validation_ds:
    manifest_filepath: "valid.json"
    batch_size: 32
    shuffle: false
    num_workers: 4
    pin_memory: true
    use_start_end_token: false

  optim:
    name: adamw
    lr: 0.001
    weight_decay: 0.01

trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
  precision: 16
```
### 4. Start Training

```bash
# Inside the NeMo container; a single GPU needs no distributed launcher
python train_nemo_asr.py

# For multi-GPU runs, use torchrun
# (torch.distributed.launch is deprecated)
torchrun --nproc_per_node=2 train_nemo_asr.py
```
## Usage Notes

1. The dataset includes both metadata and audio files.
2. Audio files are stored in the dataset repository.
3. For optimal performance:
   - Use a GPU with at least 16 GB of VRAM
   - Adjust the batch size based on your GPU memory
   - Consider gradient accumulation for larger effective batch sizes
   - Monitor training with TensorBoard (accessible via port 6006)
## Common Issues and Solutions

1. **Memory Issues**:
   - Reduce batch size if you encounter OOM errors
   - Use gradient accumulation for larger effective batch sizes
   - Enable mixed precision training (fp16)

2. **Training Speed**:
   - Increase `num_workers` based on your CPU cores
   - Use `pin_memory: true` for faster data transfer to the GPU
   - Consider using tarred datasets for faster I/O

3. **Model Performance**:
   - Adjust the learning rate based on your batch size
   - Use learning rate warmup for better convergence
   - Consider using a pretrained model as initialization