chjn committed: Update README.md
Commit e6b7945 (1 parent: 478c636)
Files changed (1): README.md (+24 −4)

README.md CHANGED
@@ -40,18 +40,38 @@ for more information.
### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+ All of the training data is extracted from the Windows client of Uma Musume: Pretty Derby using the [umamusume-voice-text-extractor](https://github.com/chinosk6/umamusume-voice-text-extractor).
+ The copyright of the training dataset belongs to Cygames.
+ Only the voice is used; the live music soundtrack is not included in the training dataset.


### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

+ #### Training Environment Preparation
+ - Download the base models mentioned in [the README.md of the *so-vits-svc* project](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md).
+ *You should download [checkpoint_best_legacy_500.pt](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md#1-if-using-contentvec-as-speech-encoderrecommended), [D_0.pth and G_0.pth](https://huggingface.co/OOPPEENN/so-vits-svc-4.0-pretrained-models/resolve/main/vec768l12_vol_tiny.7z) (for the sovits model), [model_0.pt](https://github.com/CNChTu/Diffusion-SVC/blob/Stable/README_en.md#21-pre-training-diffusion-model-which-training-full-depth) (for shallow diffusion), [rmvpe.pt](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md#rmvpe) (for the f0 predictor RMVPE), and [model](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md#nsf-hifigan) (for NSF-HiFiGAN).*
+ - Place checkpoint_best_legacy_500.pt and rmvpe.pt in .\pretrain, place model and its config.json in .\pretrain\nsf_hifigan, place D_0.pth and G_0.pth in .\logs\44k, and place model_0.pt in .\logs\44k\diffusion.
+ Credits: the D_0.pth and G_0.pth provided above are from [OOPPEENN](https://huggingface.co/OOPPEENN/so-vits-svc-4.0-pretrained-models).
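After placing the files as described, the so-vits-svc working directory should contain (layout reconstructed from the placement step above; the extractor's own output can live anywhere):

```
so-vits-svc\
├─ pretrain\
│  ├─ checkpoint_best_legacy_500.pt
│  ├─ rmvpe.pt
│  └─ nsf_hifigan\
│     ├─ config.json
│     └─ model
└─ logs\
   └─ 44k\
      ├─ D_0.pth
      ├─ G_0.pth
      └─ diffusion\
         └─ model_0.pt
```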
+
#### Preprocessing

+ - Delete all WAV files smaller than 400 KB and copy the remaining files to .\dataset_raw\AgnesTachyon
+ - Navigate to the so-vits-svc directory and execute `python resample.py --skip_loudnorm`.
+ - Execute `python preprocess_flist_config.py --speech_encoder vec768l12 --vol_aug`.
+ - Edit the parameters in config.json and diffusion.yaml.
+ - Execute `python preprocess_hubert_f0.py --f0_predictor rmvpe --use_diff`.
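The size filter in the first step can be scripted; a minimal sketch (the source directory name is hypothetical — point it at wherever the extractor wrote its WAV files):

```python
import shutil
from pathlib import Path

def collect_dataset(src_dir, dst_dir, min_bytes=400 * 1024):
    """Copy WAV files of at least min_bytes from src_dir into dst_dir.

    Mirrors the first preprocessing step: clips under 400 KB are
    discarded, the rest go into the raw-dataset folder.
    Returns the number of files copied.
    """
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = 0
    for wav in Path(src_dir).rglob("*.wav"):
        if wav.stat().st_size >= min_bytes:
            shutil.copy2(wav, dst / wav.name)
            copied += 1
    return copied
```

For example, `collect_dataset("extracted_voices", r"dataset_raw\AgnesTachyon")` (where `extracted_voices` is a placeholder for the extractor's output directory).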
+
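Which parameters to edit in config.json depends on your GPU and dataset; the fragment below is only an illustrative sketch using common so-vits-svc training keys (`batch_size`, `epochs`, `learning_rate`, `keep_ckpts`) — the actual values used for this model are not recorded here:

```json
{
  "train": {
    "batch_size": 6,
    "epochs": 10000,
    "learning_rate": 0.0001,
    "keep_ckpts": 3
  }
}
```

Lower `batch_size` if you hit out-of-memory errors; `keep_ckpts` bounds how many checkpoints are retained in .\logs\44k.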
+ #### Training
+ - Execute `python train.py -c configs/config.json -m 44k`.
+ ##### [Optional]
+ - Execute `python train_diff.py -c configs/diffusion.yaml` to train the shallow diffusion model.
+ - Execute `python cluster/train_cluster.py --gpu` to train the cluster model.
+ - Execute `python train_index.py -c configs/config.json` to train the feature index model.
+

#### Training Hyperparameters