speechbrainteam committed
Commit
f86de30
1 Parent(s): c1f4fa3

Update README.md

Files changed (1): README.md (+4 −5)
README.md CHANGED
@@ -61,7 +61,7 @@ torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
 To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
 
 ### Training
-The model was trained with SpeechBrain (d0accc8).
+The model was trained with SpeechBrain (fc2eabb7).
 To train it from scratch, follow these steps:
 1. Clone SpeechBrain:
 ```bash
@@ -78,10 +78,9 @@ pip install -e .
 ```
 cd recipes/WSJ0Mix/separation
 python train.py hparams/sepformer.yaml --data_folder=your_data_folder
+```
 
-https://drive.google.com/drive/folders/1fcVP52gHgoMX9diNN1JxX_My5KaRNZWs?usp=sharing
-
-You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1suvbKScf3VbkxRjZlpi1Q4hKU9yTdBVM?usp=sharing)
+You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1cON-eqtKv_NYnJhaE9VjLT_e2ybn-O7u?usp=sharing)
 
 ### Limitations
 The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
@@ -95,7 +94,7 @@ The SpeechBrain team does not provide any warranty on the performance achieved b
 year = {2021},
 publisher = {GitHub},
 journal = {GitHub repository},
-howpublished = {\\url{https://github.com/speechbrain/speechbrain}},
+howpublished = {\\\\url{https://github.com/speechbrain/speechbrain}},
 }
 ```
 
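The GPU-inference tip in the diff above can be sketched end to end. This is a minimal, hedged example, not the card's verbatim snippet: the checkpoint name `speechbrain/sepformer-wsj02mix`, the save directory, and the input file `mixture.wav` are assumptions not stated in this diff; `run_opts={"device":"cuda"}` and the `SepformerSeparation.from_hparams` / `separate_file` interface are SpeechBrain's pretrained-model API.

```python
# Sketch of GPU inference per the tip above. Assumes the `speechbrain` and
# `torchaudio` packages; checkpoint name and file paths are illustrative.
run_opts = {"device": "cuda"}  # forwarded to from_hparams to place the model on the GPU

try:
    import torchaudio
    from speechbrain.pretrained import SepformerSeparation

    model = SepformerSeparation.from_hparams(
        source="speechbrain/sepformer-wsj02mix",         # assumed checkpoint id
        savedir="pretrained_models/sepformer-wsj02mix",  # local cache directory
        run_opts=run_opts,
    )
    # est_sources has shape (batch, time, n_sources); save each source at 8 kHz,
    # matching the torchaudio.save calls shown in the card itself.
    est_sources = model.separate_file(path="mixture.wav")
    torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
    torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
except ImportError:
    pass  # speechbrain/torchaudio not installed; the calls above show the intended usage
```

Without `run_opts`, the model loads on CPU; passing the dict at load time avoids having to move the model and tensors manually afterwards.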