# Dynamic Audio Data Augmentation
## Key Benefits
- Enhanced Robustness: By varying spectrogram parameters and injecting realistic noise, the models learn to handle a wide range of audio conditions.
- Low Overhead: Augmentation runs inside the data collator (low overhead) rather than in the Dataset itself (higher overhead), so it adds minimal computational cost to the existing pipeline (see the collator sketch below).
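The collator-level hook can be pictured with a minimal sketch. Everything below (the `AugmentingCollator` name and the placeholder white-noise augmentation) is illustrative, not this repo's actual implementation:

```python
from dataclasses import dataclass
from typing import List

import torch

@dataclass
class AugmentingCollator:
    """Illustrative only: augmentation applied at batch-assembly time."""
    apply_augmentation: bool = True

    def __call__(self, features: List[torch.Tensor]) -> torch.Tensor:
        if self.apply_augmentation:
            # Runs once per batch inside the DataLoader worker,
            # instead of once per item in Dataset.__getitem__.
            features = [w + 0.01 * torch.randn_like(w) for w in features]
        # Pad variable-length waveforms into a single batch tensor.
        return torch.nn.utils.rnn.pad_sequence(features, batch_first=True)
```

Because the collator runs in the `DataLoader` workers, each epoch sees freshly augmented batches while the underlying dataset stays untouched.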
## On-the-Fly Spectrogram Parameter Adjustment
- `n_fft` and `hop_length`: Values are randomly drawn from predefined ranges for each audio sample, yielding varied spectrogram representations (see the sketch below).
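A minimal sketch of the idea; the parameter ranges below are hypothetical, since the actual ranges used here are not documented:

```python
import random

import torch

N_FFT_CHOICES = [400, 512, 1024]      # hypothetical range
HOP_LENGTH_CHOICES = [128, 160, 256]  # hypothetical range

def random_spectrogram(waveform: torch.Tensor) -> torch.Tensor:
    # Draw fresh STFT parameters for every sample.
    n_fft = random.choice(N_FFT_CHOICES)
    hop_length = random.choice(HOP_LENGTH_CHOICES)
    spec = torch.stft(
        waveform,
        n_fft=n_fft,
        hop_length=hop_length,
        window=torch.hann_window(n_fft),
        return_complex=True,
    )
    return spec.abs() ** 2  # power spectrogram; shape varies with the draw
```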
## Log-Mel Modulation
The augmentation integrates with the existing log-Mel spectrogram calculation: the spectrogram parameters are modulated dynamically at feature-extraction time, providing effective data augmentation without introducing additional overhead (a sketch follows).
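A sketch of how the modulation can ride on the log-Mel computation itself; `n_mels=80` and the parameter ranges are assumptions, not values taken from this repo:

```python
import random

import torch
import torchaudio

def random_log_mel(waveform: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    # The randomized STFT parameters feed straight into the Mel transform,
    # so no separate augmentation pass over the audio is needed.
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate,
        n_fft=random.choice([400, 512, 1024]),
        hop_length=random.choice([128, 160, 256]),
        n_mels=80,
    )(waveform)
    return torch.log(mel.clamp(min=1e-10))  # log-Mel features
```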
## Efficiency and Performance
### Log-Mel Spectrogram Manipulation
Because the augmentation is folded into the log-Mel spectrogram calculation that preprocessing performs anyway, it adds no extra pass over the audio: preprocessing stays computationally lightweight and fast.
## Adaptive Context-Aware Noise Injection
The preprocessing pipeline also includes adaptive, context-aware noise injection to enhance model robustness. Noise intensity is adjusted dynamically based on the amplitude of the audio signal, keeping the augmentation realistic and effective.
- Types of Noise: White, pink, and environmental noise.
- Dynamic Adjustment: Noise intensity is scaled based on the amplitude of the audio signal (see the sketch after this list).
- Integration: The noise injection process is seamlessly integrated into our existing log-Mel spectrogram calculation pipeline, adding minimal overhead.
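A minimal sketch of amplitude-scaled white-noise injection; the SNR-based scaling and the `snr_db` default are assumptions, and pink or environmental noise would substitute a different `noise` source:

```python
import torch

def inject_adaptive_noise(waveform: torch.Tensor, snr_db: float = 20.0) -> torch.Tensor:
    noise = torch.randn_like(waveform)  # white-noise source
    signal_rms = waveform.pow(2).mean().sqrt()
    noise_rms = noise.pow(2).mean().sqrt()
    # Scale noise so its level tracks the signal's amplitude at a target SNR:
    # louder inputs receive proportionally louder noise.
    target_noise_rms = signal_rms / (10 ** (snr_db / 20))
    return waveform + noise * (target_noise_rms / (noise_rms + 1e-10))
```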
### Key Benefits
- Improved Generalization: Models become more resilient to noise and diverse audio conditions.
- Low Overhead: The augmentation process leverages the existing pipeline, ensuring efficient computation without significant additional cost.
## Example Usage
```python
# Works with HF Transformers or a pure PyTorch training loop.
import torch

data_collator = DataCollatorSpeechSeq2SeqWithPadding(
    processor=processor,
    decoder_start_token_id=model.config.decoder_start_token_id,
    apply_augmentation=True,
    apply_noise_injection=True,  # enable adaptive noise injection
)

dataloader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True, collate_fn=data_collator
)

for batch in dataloader:
    outputs = model(**batch)  # the collator returns a dict of tensors
```
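For the HF Transformers path, the same collator should slot into a standard `Seq2SeqTrainer` setup (a sketch assuming the usual transformers API; the argument values are placeholders):

```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./out",
    per_device_train_batch_size=2,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=data_collator,  # augmentation + noise injection as configured above
)
trainer.train()
```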