---
language: "en"
thumbnail:
tags:
- Source Separation
- Speech Separation
- Audio Source Separation
- Libri3Mix
- SepFormer
- Transformer
- audio-to-audio
- audio-source-separation
- speechbrain
license: "apache-2.0"
datasets:
- Libri3Mix
metrics:
- SI-SNRi
- SDRi
---

<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>

# SepFormer trained on Libri3Mix

This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain and pretrained on the Libri3Mix dataset. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model achieves 19.8 dB SI-SNRi on the test set of the Libri3Mix dataset.

| Release | Test-Set SI-SNRi | Test-Set SDRi |
|:-------------:|:--------------:|:--------------:|
| 16-09-22 | 19.0 dB | 19.4 dB |

## Install SpeechBrain

First of all, please install SpeechBrain with the following command:

```
pip install speechbrain
```

Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).

### Perform source separation on your own audio file

```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio

# Download the pretrained model from the HuggingFace Hub and cache it locally
model = separator.from_hparams(source="speechbrain/sepformer-libri3mix", savedir='pretrained_models/sepformer-libri3mix')

# Separate the example mixture (fetched from the sepformer-wsj03mix repository) into three sources
est_sources = model.separate_file(path='speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav')

# est_sources has shape [batch, time, n_sources]; save each estimated source as an 8 kHz wav file
torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
torchaudio.save("source3hat.wav", est_sources[:, :, 2].detach().cpu(), 8000)
```
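
If your mixture is already loaded in memory, the same interface also exposes a `separate_batch` method that operates on a `[batch, time]` waveform tensor. A minimal sketch, assuming an 8 kHz single-channel file named `mixture_8k.wav` (an illustrative name, not a file shipped with this repository):

```python
import torchaudio
from speechbrain.pretrained import SepformerSeparation as separator

model = separator.from_hparams(source="speechbrain/sepformer-libri3mix", savedir='pretrained_models/sepformer-libri3mix')

# Load an 8 kHz, single-channel mixture; torchaudio returns a [channels, time] tensor
mix, fs = torchaudio.load("mixture_8k.wav")

# separate_batch expects a batch of waveforms of shape [batch, time]
est_sources = model.separate_batch(mix)  # -> [batch, time, n_sources]
```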

The system expects input recordings sampled at 8 kHz (single channel). If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface.
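
For instance, a minimal torchaudio-based resampling sketch (the file names are illustrative, not part of this repository):

```python
import torchaudio

# Load the original recording; torchaudio returns a [channels, time] tensor and its sample rate
signal, fs = torchaudio.load("my_mixture_16k.wav")

# Downmix to a single channel if the recording is multi-channel
if signal.shape[0] > 1:
    signal = signal.mean(dim=0, keepdim=True)

# Resample to the 8 kHz rate expected by the model and save the result
resampler = torchaudio.transforms.Resample(orig_freq=fs, new_freq=8000)
torchaudio.save("my_mixture_8k.wav", resampler(signal), 8000)
```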

### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
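
For example, a minimal sketch reusing the snippet above (the example mixture path is the same one used earlier):

```python
from speechbrain.pretrained import SepformerSeparation as separator

# Load the model directly on a CUDA device (assumes a GPU is available)
model = separator.from_hparams(
    source="speechbrain/sepformer-libri3mix",
    savedir="pretrained_models/sepformer-libri3mix",
    run_opts={"device": "cuda"},
)

# Separation now runs on the GPU
est_sources = model.separate_file(path="speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav")
```
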
### Training
The model was trained with SpeechBrain (commit fc2eabb7).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```

3. Run training:
```bash
cd recipes/LibriMix/separation
python train.py hparams/sepformer.yaml --data_folder=your_data_folder
```
Note: change `num_spks` to 3 in the yaml file.

You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1DN49LtAs6cq1X0jZ8tRMlh2Pj6AecClz).

### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

#### Referencing SpeechBrain

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```

#### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
  title={Attention is All You Need in Speech Separation},
  author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
  year={2021},
  booktitle={ICASSP 2021}
}

@misc{subakan2022sepformer,
  author = {Subakan, Cem and Ravanelli, Mirco and Cornell, Samuele and Grondin, Francois and Bronzi, Mirko},
  title = {On Using Transformers for Speech-Separation},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/