---
license: cc-by-nc-sa-4.0
library_name: NeMo
metrics:
- PESQ
- ESTOI
- SI-SDR
- WER
tags:
- speech-enhancement
- denoising
- Schrödinger-bridge
- NeMo
- speech
- audio
---
## Model Overview

### Description

This is a generative speech denoising model based on the Schrödinger bridge [1]. It extracts clean speech from noisy recordings, for human or machine listeners. The model is trained on a publicly available research dataset.

This model is for research and development only.

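As a rough sketch of the approach (the exact coefficients and noise schedule are given in [1]), the Schrödinger bridge defines a process $x_t$ that interpolates between the clean speech $x_0$ and the noisy recording $y$:

$$
x_t = (1 - w_t)\,x_0 + w_t\,y + \sigma_t\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),
$$

with $w_0 = 0$ and $w_T = 1$. At inference, sampling starts from the noisy recording and integrates the learned reverse process (with an SDE or ODE sampler, see below) toward an estimate of the clean signal.
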
### License/Terms of Use
The license to use this model is covered by [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0) license.

## References

[1] [Schrödinger Bridge for Generative Speech Enhancement](https://arxiv.org/abs/2407.16074), Interspeech, 2024.

## Model Architecture
**Architecture Type:** Schrödinger Bridge<br>
**Network Architecture:** U-Net with convolutional layers<br>

## Input
**Input Type(s):** Audio <br>
**Input Format(s):** .wav files <br>
**Input Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Input:** 16000 Hz Mono-channel Audio <br>

## Output
**Output Type(s):** Audio <br>
**Output Format:** .wav files <br>
**Output Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Output:** 16000 Hz Mono-channel Audio <br>

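Both input and output are 16 kHz, mono .wav files. If your recordings are in a different format, a simple way to prepare them (file names here are placeholders) is:

```python
import librosa
import soundfile as sf

# Downmix to mono and resample to the 16 kHz rate expected by the model.
audio, sr = librosa.load('recording.wav', sr=16000, mono=True)

# Save as a 16 kHz mono .wav file.
sf.write('recording_16k_mono.wav', audio, samplerate=16000)
```
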
## Software Integration
**Runtime Engine(s):**<br>
* NeMo-2.0.0 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere<br>
* NVIDIA Blackwell<br>
* NVIDIA Jetson<br>
* NVIDIA Hopper<br>
* NVIDIA Lovelace<br>
* NVIDIA Turing<br>
* NVIDIA Volta<br>

**Preferred Operating System(s):** <br>
* Linux<br>
* Windows<br>

## Model Version(s)
`se_den_sb_16k_small_v1.0`<br>

# Training, Testing, and Evaluation Datasets

## Training Dataset
**Link:**
[WSJ0](https://catalog.ldc.upenn.edu/LDC93S6A), [CHiME3](https://catalog.ldc.upenn.edu/LDC2017S24)

**Data Collection Method by dataset:** Human <br>

**Labeling Method by dataset:** Human<br>

**Properties (Quantity, Dataset Descriptions, Sensor(s)):**
WSJ0 was used for clean speech signals and CHiME3 was used for additive noise signals. The observed signals were generated with signal-to-noise ratios between -6 dB and 14 dB. The total size of the training dataset was approximately 25 hours.<br>

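The noisy inputs are clean speech mixed with noise at a target SNR. The following is a minimal sketch of how such a mixture can be constructed (the scaling rule is standard; the exact data-generation pipeline used for this model is not specified here):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix speech and noise (same length, same rate) at a target SNR in dB."""
    speech_power = np.mean(speech**2)
    noise_power = np.mean(noise**2)
    # Scale the noise so that speech_power / (scaled noise power) matches the target SNR.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example: a mixture at 0 dB SNR, inside the [-6, 14] dB training range.
# speech, noise = ...  # load 16 kHz mono signals of equal length
# noisy = mix_at_snr(speech, noise, snr_db=0.0)
```
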
## Testing Dataset
**Link:**
[WSJ0](https://catalog.ldc.upenn.edu/LDC93S6A), [CHiME3](https://catalog.ldc.upenn.edu/LDC2017S24)

**Data Collection Method by dataset:** Human <br>

**Labeling Method by dataset:** Human<br>

**Properties (Quantity, Dataset Descriptions, Sensor(s)):**
WSJ0 was used for clean speech signals and CHiME3 was used for additive noise signals. The observed signals were generated with signal-to-noise ratios between -6 dB and 14 dB. The total size of the testing dataset was approximately 2 hours.<br>

## Evaluation Dataset
**Link:**
[WSJ0](https://catalog.ldc.upenn.edu/LDC93S6A), [CHiME3](https://catalog.ldc.upenn.edu/LDC2017S24)

**Data Collection Method by dataset:** Human <br>

**Labeling Method by dataset:** Human<br>

**Properties (Quantity, Dataset Descriptions, Sensor(s)):**
WSJ0 was used for clean speech signals and CHiME3 was used for additive noise signals. The observed signals were generated with signal-to-noise ratios between -6 dB and 14 dB. The total size of the evaluation dataset was approximately 2 hours.<br>

## Inference
**Engine:** NeMo 2.0 <br>

**Test Hardware:** NVIDIA V100<br>

# Performance

The model is trained on the training subset of the WSJ0-CHiME3 dataset using the auxiliary L1-norm loss [1].

The model is evaluated using several instrumental metrics: perceptual evaluation of speech quality (PESQ), extended short-term objective intelligibility (ESTOI), and scale-invariant signal-to-distortion ratio (SI-SDR). Word error rate (WER) is evaluated using the [Conformer-Transducer Large English ASR model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_conformer_transducer_large).

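As an illustration, SI-SDR between a processed signal and its clean reference can be computed with `torchmetrics` (an assumption of convenience; any implementation consistent with the SI-SDR definition will do):

```python
import torch
from torchmetrics.audio import ScaleInvariantSignalDistortionRatio

si_sdr = ScaleInvariantSignalDistortionRatio()

# Toy example with random tensors standing in for (processed, clean) pairs.
processed = torch.randn(1, 16000)
clean = torch.randn(1, 16000)
print(si_sdr(processed, clean))  # SI-SDR in dB
```
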
Metrics are reported on the test set of the WSJ0-CHiME3 dataset using either the SDE or the ODE sampler.

| Signal        | PESQ | ESTOI | SI-SDR / dB | WER / % |
|:-------------:|:----:|:-----:|:-----------:|:-------:|
| Input         | 1.35 | 0.63  | 4.0         | 12.18   |
| Processed SDE | 2.67 | 0.89  | 15.1        | 5.10    |
| Processed ODE | 2.77 | 0.90  | 16.2        | 4.13    |

# How to use this model

The model is available for use in the NVIDIA NeMo toolkit, and can be used to process audio or as a starting point for fine-tuning.

## Load the model
```python
from nemo.collections.audio.models import AudioToAudioModel
model = AudioToAudioModel.from_pretrained('nvidia/se_den_sb_16k_small')
```
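
If you have a local copy of the checkpoint, it can also be restored directly from the `.nemo` file (the file name below is a placeholder):

```python
# Restore from a local .nemo checkpoint instead of downloading it.
model = AudioToAudioModel.restore_from('se_den_sb_16k_small.nemo')
```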

## Process audio
A single audio file can be processed as follows:

```python
import librosa
import torch

# Run on GPU if available; model and inputs must share a device.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)

# Load input audio at the model sample rate, shaped (batch, channel, time).
audio_in, _ = librosa.load(path_to_input_audio, sr=model.sample_rate)
audio_in_signal = torch.from_numpy(audio_in).view(1, 1, -1).to(device)
audio_in_length = torch.tensor([audio_in_signal.size(-1)]).to(device)
audio_out_signal, _ = model(input_signal=audio_in_signal, input_length=audio_in_length)
```

For processing several audio files at once, check the [process_audio script](https://github.com/NVIDIA/NeMo/blob/main/examples/audio/process_audio.py) in NeMo; a minimal loop over individual files is also sketched below.

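As a simple alternative to the script, the single-file steps above can be wrapped in a loop (file names are hypothetical; this reuses `model` and `device` from the previous snippet):

```python
import soundfile as sf

# Hypothetical (input, output) file pairs.
files = [('noisy_01.wav', 'denoised_01.wav'), ('noisy_02.wav', 'denoised_02.wav')]

with torch.inference_mode():
    for path_in, path_out in files:
        audio_in, _ = librosa.load(path_in, sr=model.sample_rate)
        audio_in_signal = torch.from_numpy(audio_in).view(1, 1, -1).to(device)
        audio_in_length = torch.tensor([audio_in_signal.size(-1)]).to(device)
        audio_out_signal, _ = model(input_signal=audio_in_signal, input_length=audio_in_length)
        sf.write(path_out, audio_out_signal.cpu().numpy().squeeze(), samplerate=model.sample_rate)
```
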
## Listen to audio
```python
import soundfile as sf

# Save the processed audio to a .wav file.
audio_out = audio_out_signal.cpu().numpy().squeeze()
sf.write(path_to_output_audio, audio_out, samplerate=model.sample_rate)
```
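
In a Jupyter notebook, the processed signal can also be auditioned inline (assuming an IPython environment):

```python
from IPython.display import Audio

# Render an inline audio player for the processed signal.
Audio(audio_out, rate=model.sample_rate)
```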

## Change sampler configuration
```python
model.sampler.process = 'ode'  # default sampler is 'sde'
model.sampler.num_steps = 10   # default is 50 steps

audio_out_signal, _ = model(input_signal=audio_in_signal, input_length=audio_in_length)
```
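
On the WSJ0-CHiME3 test set above, the ODE sampler gives slightly better instrumental metrics and WER than the SDE sampler. Reducing `num_steps` lowers the number of reverse-process iterations, which typically trades some enhancement quality for faster inference.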

# Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).