victan committed
Commit 9539e68
1 Parent(s): c68b627

Upload README.md with huggingface_hub

Files changed (1): README.md +73 -0
# vocal-remover

[![Release](https://img.shields.io/github/release/tsurumeso/vocal-remover.svg)](https://github.com/tsurumeso/vocal-remover/releases/latest)
[![Release](https://img.shields.io/github/downloads/tsurumeso/vocal-remover/total.svg)](https://github.com/tsurumeso/vocal-remover/releases)

This is a deep-learning-based tool to extract the instrumental track from your songs.

## Installation

### Getting vocal-remover
Download the latest version from [here](https://github.com/tsurumeso/vocal-remover/releases).

### Install PyTorch
**See**: [GET STARTED](https://pytorch.org/get-started/locally/)

### Install the other packages
```
cd vocal-remover
pip install -r requirements.txt
```

## Usage
The following commands separate the input into instrumental and vocal tracks, saved as `*_Instruments.wav` and `*_Vocals.wav`.

### Run on CPU
```
python inference.py --input path/to/an/audio/file
```

### Run on GPU
```
python inference.py --input path/to/an/audio/file --gpu 0
```
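
The `--gpu 0` argument selects the first CUDA device. If separation still runs on the CPU, a quick way to confirm that your PyTorch install can actually see the GPU (a standard PyTorch check, not part of this repository) is:
```
import torch

# A CUDA-enabled PyTorch build is required for the --gpu option;
# the second line prints False if only the CPU build is installed.
print(torch.__version__)
print(torch.cuda.is_available())
```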

### Advanced options
The `--tta` option performs test-time augmentation to improve separation quality.
```
python inference.py --input path/to/an/audio/file --tta --gpu 0
```
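
In general, test-time augmentation averages the model's predictions over several perturbed copies of the input. Below is a minimal sketch of that idea for mask-based separation, assuming a model that maps a magnitude spectrogram to a soft mask; the interface and the shift-based augmentation are illustrative, not vocal-remover's actual implementation:
```
import numpy as np

def separate_with_tta(model, spec, shift=64):
    # `model` is a hypothetical callable: (freq, time) magnitude
    # spectrogram in, soft mask of the same shape out.
    mask = model(spec)                                      # plain pass
    shifted = np.roll(spec, shift, axis=1)                  # augmented pass
    mask_shifted = np.roll(model(shifted), -shift, axis=1)  # undo the shift
    mask = (mask + mask_shifted) / 2                        # average estimates
    instruments = spec * mask
    vocals = spec * (1.0 - mask)
    return instruments, vocals
```
Averaging the original and shift-corrected masks tends to smooth out frame-local errors; a real implementation would pad rather than wrap at the spectrogram edges.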

<!-- The `--postprocess` option masks the instrumental part based on the vocal volume to improve separation quality.
**Experimental Warning**: If you encounter any problems with this option, please disable it.
```
python inference.py --input path/to/an/audio/file --postprocess --gpu 0
``` -->

## Train your own model

### Place your dataset
```
path/to/dataset/
  +- instruments/
  |    +- 01_foo_inst.wav
  |    +- 02_bar_inst.mp3
  |    +- ...
  +- mixtures/
       +- 01_foo_mix.wav
       +- 02_bar_mix.mp3
       +- ...
```
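
Judging by the `01_`/`02_` prefixes, each mixture is matched with its instrumental counterpart by filename order. A small sketch of that pairing convention (the helper is hypothetical, for illustration only, and assumes sorted filenames align the two directories):
```
import os

def list_pairs(dataset_dir):
    # Hypothetical helper, not part of train.py: pair instrumental and
    # mixture files by sorted filename order.
    inst_dir = os.path.join(dataset_dir, "instruments")
    mix_dir = os.path.join(dataset_dir, "mixtures")
    inst_files = sorted(os.listdir(inst_dir))
    mix_files = sorted(os.listdir(mix_dir))
    assert len(inst_files) == len(mix_files), "directories must align"
    return [
        (os.path.join(inst_dir, i), os.path.join(mix_dir, m))
        for i, m in zip(inst_files, mix_files)
    ]
```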

### Train a model
```
python train.py --dataset path/to/dataset --mixup_rate 0.5 --gpu 0
```
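
The `--mixup_rate` flag suggests mixup-style augmentation, where training pairs are occasionally blended with other pairs from the batch. A rough sketch of standard mixup under that assumption; train.py's actual formulation may differ:
```
import numpy as np

def mixup(batch_x, batch_y, mixup_rate=0.5, alpha=1.0):
    # Illustrative sketch of standard mixup on numpy batches:
    # batch_x holds mixture spectrograms, batch_y the instrumental
    # targets, both of shape (batch, ...).
    if np.random.rand() >= mixup_rate:
        return batch_x, batch_y            # leave the batch unchanged
    lam = np.random.beta(alpha, alpha)     # blending weight in [0, 1]
    perm = np.random.permutation(len(batch_x))  # partner examples
    mixed_x = lam * batch_x + (1 - lam) * batch_x[perm]
    mixed_y = lam * batch_y + (1 - lam) * batch_y[perm]
    return mixed_x, mixed_y
```
Blending inputs and targets with the same weight keeps the mixture/instrumental correspondence intact, which is what makes mixup applicable to source separation.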

## References
- [1] Jansson et al., "Singing Voice Separation with Deep U-Net Convolutional Networks", https://ejhumphrey.com/assets/pdf/jansson2017singing.pdf
- [2] Takahashi et al., "Multi-scale Multi-band DenseNets for Audio Source Separation", https://arxiv.org/pdf/1706.09588.pdf
- [3] Takahashi et al., "MMDenseLSTM: An Efficient Combination of Convolutional and Recurrent Neural Networks for Audio Source Separation", https://arxiv.org/pdf/1805.02410.pdf
- [4] Choi et al., "Phase-Aware Speech Enhancement with Deep Complex U-Net", https://openreview.net/pdf?id=SkeRTsAcYm
- [5] Jansson et al., "Learned Complex Masks for Multi-Instrument Source Separation", https://arxiv.org/pdf/2103.12864.pdf
- [6] Liutkus et al., "The 2016 Signal Separation Evaluation Campaign", Latent Variable Analysis and Signal Separation - 12th International Conference