ksingla025 committed
Commit 375b45f · verified · 1 Parent(s): 4dab24e

Update README.md

Files changed (1):
  1. README.md +0 -146
README.md CHANGED
@@ -72,149 +72,3 @@ This dataset contains both metadata and audio files for English speech recognition
  "duration": 6.87
}
```

## Training NeMo Conformer ASR

### 1. Pull and Run NeMo Docker
```bash
# Pull the NeMo Docker image
docker pull nvcr.io/nvidia/nemo:24.05

# Run the container with GPU support; adjust the volume mounts
# to wherever your data lives on the host
docker run --gpus all -it --rm \
  -v /external1:/external1 \
  -v /external2:/external2 \
  -v /external3:/external3 \
  --shm-size=8g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  nvcr.io/nvidia/nemo:24.05
```
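
Before training, it is worth confirming that the container actually sees the GPU and that NeMo imports cleanly. A minimal check:

```python
# check_env.py: verify GPU visibility and NeMo availability in the container
import torch
import nemo.collections.asr as nemo_asr

print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
print("NeMo ASR collection imported:", nemo_asr.__name__)
```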

### 2. Create Training Script
Create a script `train_nemo_asr.py` that merges the override config from step 3 over a base Conformer CTC config and launches training:
```python
import pytorch_lightning as pl
from omegaconf import OmegaConf
from nemo.collections.asr.models import EncDecCTCModel

# Base Conformer CTC architecture config; it ships with the NeMo
# repository under examples/asr/conf/conformer/conformer_ctc_char.yaml
# (copy it next to this script).
base_config = OmegaConf.load("conformer_ctc_char.yaml")

# Dataset, optimizer, and trainer overrides (config.yaml from step 3)
overrides = OmegaConf.load("config.yaml")
config = OmegaConf.merge(base_config, overrides)

# NeMo reads training data from manifest files, not directly from
# Hugging Face dataset objects; see the conversion sketch below.

# Create the trainer first so NeMo can attach the model to it
trainer = pl.Trainer(**config.trainer)

# Initialize the model with the merged config
model = EncDecCTCModel(cfg=config.model, trainer=trainer)

# Train
trainer.fit(model)
```
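
NeMo's dataloaders consume JSON-lines manifests, so the Hugging Face dataset has to be converted once before training. A minimal sketch, assuming each record carries an audio path, a transcript, and a duration; the column names below are assumptions, so check `dataset["train"].features` for the real ones:

```python
import json
from datasets import load_dataset

# Download the dataset (metadata + audio) from Hugging Face
dataset = load_dataset("WhissleAI/Meta_STT_EN_Set2")

# Write one JSON object per line in NeMo's manifest schema:
# {"audio_filepath": ..., "text": ..., "duration": ...}
with open("train_manifest.json", "w") as f:
    for row in dataset["train"]:
        f.write(json.dumps({
            "audio_filepath": row["audio_filepath"],  # assumed column name
            "text": row["text"],                      # assumed column name
            "duration": row["duration"],              # assumed column name
        }) + "\n")
```

The same loop over the validation split (if the dataset provides one) produces `valid_manifest.json`.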

### 3. Create Config File
Create a config file `config.yaml` with the data, optimizer, and trainer settings (the model architecture comes from the base Conformer config loaded by the script):
```yaml
model:
  train_ds:
    manifest_filepath: "train_manifest.json"
    batch_size: 32
    shuffle: true
    num_workers: 4
    pin_memory: true

  validation_ds:
    manifest_filepath: "valid_manifest.json"
    batch_size: 32
    shuffle: false
    num_workers: 4
    pin_memory: true

  optim:
    name: adamw
    lr: 0.001
    weight_decay: 0.01

trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
  precision: 16
```
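
Optionally, NeMo's experiment manager can take care of checkpointing and TensorBoard logging (the container already maps port 6006). A sketch of how it could slot into `train_nemo_asr.py` between trainer creation and `trainer.fit`; the directory and experiment name are placeholders:

```python
from omegaconf import OmegaConf
from nemo.utils.exp_manager import exp_manager

# Attach checkpointing + TensorBoard logging to the existing trainer.
# Assumes the trainer was built with logger disabled (the NeMo base
# configs set logger: false so exp_manager can install its own).
exp_manager(trainer, OmegaConf.create({
    "exp_dir": "experiments",            # placeholder output directory
    "name": "conformer_ctc_en_set2",     # placeholder experiment name
    "create_tensorboard_logger": True,
    "create_checkpoint_callback": True,
}))
```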

### 4. Start Training
```bash
# Inside the NeMo container
python train_nemo_asr.py
```
PyTorch Lightning launches the distributed workers itself when `trainer.devices` is greater than 1, so the deprecated `torch.distributed.launch` wrapper is not needed.
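
After training, the saved checkpoint can be sanity-checked on a few files. A sketch, with placeholder paths:

```python
from nemo.collections.asr.models import EncDecCTCModel

# Restore the trained model from a .nemo archive (placeholder path)
model = EncDecCTCModel.restore_from(
    "experiments/conformer_ctc_en_set2/checkpoints/model.nemo"
)

# transcribe() accepts a list of audio file paths (placeholder file)
print(model.transcribe(["sample.wav"]))
```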

## Usage Notes

1. The dataset includes both metadata and audio files.
2. Audio files are stored in the dataset repository.
3. For optimal performance:
   - Use a GPU with at least 16GB VRAM
   - Adjust batch size based on your GPU memory
   - Consider gradient accumulation for larger effective batch sizes (see the sketch after this list)
   - Monitor training with TensorBoard (accessible via port 6006)
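
Gradient accumulation in particular is a one-line trainer setting. A sketch with illustrative values:

```python
import pytorch_lightning as pl

# With batch_size=32 and 4 accumulation steps, gradients are applied
# as if the batch size were 128, without the extra GPU memory.
trainer = pl.Trainer(
    devices=1,
    accelerator="gpu",
    precision=16,
    max_epochs=100,
    accumulate_grad_batches=4,  # illustrative value
)
```

The same key can simply be added under `trainer:` in `config.yaml`.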

## Common Issues and Solutions

1. **Memory Issues**:
   - Reduce batch size if you encounter OOM errors
   - Use gradient accumulation for larger effective batch sizes
   - Enable mixed precision training (fp16)

2. **Training Speed**:
   - Increase `num_workers` based on your CPU cores
   - Use `pin_memory: true` for faster data transfer to the GPU
   - Consider using tarred datasets for faster I/O

3. **Model Performance**:
   - Adjust the learning rate based on your batch size
   - Use learning rate warmup for better convergence
   - Consider using a pretrained model as initialization (see the sketch below)
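
The last two points map onto standard NeMo options. A sketch that warm-starts from NVIDIA's public English Conformer CTC checkpoint and adds warmup; the checkpoint name is one of several available, and the scheduler numbers are illustrative:

```python
from omegaconf import OmegaConf
from nemo.collections.asr.models import EncDecCTCModel

# Initialize from a pretrained checkpoint instead of training from scratch
model = EncDecCTCModel.from_pretrained("stt_en_conformer_ctc_small")

# Replace the optimizer config with one that includes LR warmup
model.setup_optimization(OmegaConf.create({
    "name": "adamw",
    "lr": 0.001,
    "weight_decay": 0.01,
    "sched": {
        "name": "CosineAnnealing",
        "warmup_steps": 1000,   # illustrative
        "max_steps": 100000,    # illustrative; usually derived from the trainer
        "min_lr": 1e-6,
    },
}))
```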
 