Update README.md
README.md (CHANGED)
@@ -64,6 +64,9 @@ enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.]))
 torchaudio.save('enhanced.wav', enhanced.cpu(), 16000)
 ```
 
+The system is trained with recordings sampled at 16kHz (single channel).
+The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *enhance_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *enhance_batch* as in the example.
+
 ### Inference on GPU
 To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
 
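The added note above describes the expected input format for `enhance_batch`. As a hedged sketch (not part of this change), here is one way to bring an arbitrary recording into that format manually; the `SpectralMaskEnhancement` interface, the model `source`, and the input file name are assumptions for illustration, not values taken from this README.

```python
import torch
import torchaudio
from speechbrain.pretrained import SpectralMaskEnhancement  # assumed interface

# Hypothetical model identifier; substitute the one this README documents.
enhance_model = SpectralMaskEnhancement.from_hparams(
    source="speechbrain/metricgan-plus-voicebank",
    savedir="pretrained_models/enhancement",
)

# Load an arbitrary recording; torchaudio returns a [channels, time] tensor.
noisy, sample_rate = torchaudio.load("my_noisy_recording.wav")

# Downmix to a single channel if the file is stereo/multichannel.
if noisy.shape[0] > 1:
    noisy = noisy.mean(dim=0, keepdim=True)

# Resample to the 16kHz rate the system was trained on.
if sample_rate != 16000:
    noisy = torchaudio.functional.resample(noisy, orig_freq=sample_rate, new_freq=16000)

# The [1, time] tensor doubles as a batch of one; lengths are relative in [0, 1].
enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.0]))
torchaudio.save("enhanced.wav", enhanced.cpu(), 16000)
```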
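For the GPU instructions, a minimal sketch of where `run_opts` goes, under the same assumptions (class name and model `source` are placeholders):

```python
from speechbrain.pretrained import SpectralMaskEnhancement  # assumed interface

# run_opts places the pretrained model on the GPU at load time.
enhance_model = SpectralMaskEnhancement.from_hparams(
    source="speechbrain/metricgan-plus-voicebank",  # hypothetical model id
    savedir="pretrained_models/enhancement",
    run_opts={"device": "cuda"},
)

# Subsequent enhance_file / enhance_batch calls then run on the GPU;
# keep any tensors passed to enhance_batch on the same device as the model.
```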