---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: audio-to-audio
---
# AgnesTachyon So-vits-svc 4.1 Model
A so-vits-svc 4.1 voice conversion model of Agnes Tachyon from Uma Musume: Pretty Derby.
## Model Details
### Model Description
This is a so-vits-svc 4.1 voice conversion model of Agnes Tachyon, a character in Uma Musume: Pretty Derby.
- **Developed by:** [svc-develop-team](https://github.com/svc-develop-team)
- **Trained by:** [70295](https://space.bilibili.com/700776013)
- **Model type:** Audio to Audio
- **License:** [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0)
## Uses
- Clone the [so-vits-svc repository](https://github.com/svc-develop-team/so-vits-svc) and install all dependencies.
- Create a folder named "models" in the repository root and place the "AgnesTachyon" folder from this repository inside it.
- Navigate to the "so-vits-svc" directory and execute the following command, replacing "xxx.wav" with the name of your source audio file (placed in the "raw" folder) and "x" with the number of semitones to raise or lower the pitch.
```
python inference_main.py -m "models/AgnesTachyon/AgnesTachyon.pth" -c "models/AgnesTachyon/config.json" -n "xxx.wav" -t x -s "AgnesTachyon"
```
A shallow diffusion model, a cluster model, and a feature index model are also provided. Check [the README.md file of the *so-vits-svc project*](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md)
for more information.
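For orientation, here is a hedged sketch of an inference call that also loads those optional models. The file names `diffusion.pt` and `kmeans.pt` below are placeholders (check the actual file names shipped in this repository), and the flag names are taken from the 4.1-Stable branch; run `python inference_main.py -h` to confirm them on your checkout.
```
python inference_main.py -m "models/AgnesTachyon/AgnesTachyon.pth" -c "models/AgnesTachyon/config.json" -n "xxx.wav" -t 0 -s "AgnesTachyon" -shd -dm "models/AgnesTachyon/diffusion.pt" -dc "models/AgnesTachyon/diffusion.yaml" -cm "models/AgnesTachyon/kmeans.pt" -cr 0.5
```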
## Training Details
### Training Data
All of the training data is extracted from the Windows client of Uma Musume: Pretty Derby using the [umamusume-voice-text-extractor](https://github.com/chinosk6/umamusume-voice-text-extractor).
The copyright of the training dataset belongs to Cygames.
Only the character's voice lines are used; the in-game live music tracks are not included in the training dataset.
### Training Procedure
#### Training Environment Preparation
- Download the base models mentioned in [the README.md file of the *so-vits-svc project*](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md).
*Specifically, download [checkpoint_best_legacy_500.pt](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md#1-if-using-contentvec-as-speech-encoderrecommended) (the ContentVec speech encoder), [D_0.pth and G_0.pth](https://huggingface.co/OOPPEENN/so-vits-svc-4.0-pretrained-models/resolve/main/vec768l12_vol_tiny.7z) (pre-trained weights for the So-VITS model), [model_0.pt](https://github.com/CNChTu/Diffusion-SVC/blob/Stable/README_en.md#21-pre-training-diffusion-model-which-training-full-depth) (for shallow diffusion), [rmvpe.pt](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md#rmvpe) (the RMVPE f0 predictor), and the NSF-HiFiGAN [model](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/README.md#nsf-hifigan) (the vocoder).*
- Place checkpoint_best_legacy_500.pt and rmvpe.pt in .\pretrain; place the NSF-HiFiGAN model and its config.json in .\pretrain\nsf_hifigan; place D_0.pth and G_0.pth in .\logs\44k; place model_0.pt in .\logs\44k\diffusion (see the tree sketch below).
Credits: the D_0.pth and G_0.pth linked above are from [OOPPEENN](https://huggingface.co/OOPPEENN/so-vits-svc-4.0-pretrained-models).
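Assuming the placement above, the so-vits-svc working directory should look roughly like this before training (an illustrative sketch, not an exhaustive listing):
```
so-vits-svc/
├── pretrain/
│   ├── checkpoint_best_legacy_500.pt
│   ├── rmvpe.pt
│   └── nsf_hifigan/
│       ├── config.json
│       └── model
└── logs/
    └── 44k/
        ├── D_0.pth
        ├── G_0.pth
        └── diffusion/
            └── model_0.pt
```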
#### Preprocessing
- Delete all WAV files smaller than 400 KB, then copy the remaining files to .\dataset_raw\AgnesTachyon (a shell sketch of the size filter follows this list).
- Navigate to the "so-vits-svc" directory and execute `python resample.py --skip_loudnorm`.
- Execute `python preprocess_flist_config.py --speech_encoder vec768l12 --vol_aug`.
- Edit the parameters in config.json and diffusion.yaml as needed (see Training Hyperparameters below).
- Execute `python preprocess_hubert_f0.py --f0_predictor rmvpe --use_diff`.
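On a Unix-like shell, the 400 KB size filter from the first step can be done with a one-liner such as the one below, run either on the extracted audio before copying or on the dataset folder afterwards (a sketch; on Windows, use an equivalent PowerShell filter or sort by size in Explorer):
```
find dataset_raw/AgnesTachyon -name "*.wav" -size -400k -delete
```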
#### Training
- Execute `python train.py -c configs/config.json -m 44k`.
##### [Optional]
- Execute `python train_diff.py -c configs/diffusion.yaml` to train the shallow diffusion model.
- Execute `python cluster/train_cluster.py --gpu` to train the cluster model.
- Execute `python train_index.py -c configs/config.json` to train the feature index model.
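Whichever of these you run, training progress (losses and evaluation audio) can be monitored with TensorBoard, assuming it is installed in the environment, since checkpoints and event logs are written under logs/44k:
```
tensorboard --logdir logs/44k
```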
#### Training Hyperparameters
*Please check the config.json and diffusion.yaml in this repository for the training hyperparameters.*
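For orientation, the values most often tuned live in the `train` block of config.json. The excerpt below is illustrative only: the key names follow the 4.1-Stable template, the values are placeholders, and the config.json shipped with this model is authoritative.
```
"train": {
  "log_interval": 200,
  "eval_interval": 800,
  "epochs": 10000,
  "learning_rate": 0.0001,
  "batch_size": 6,
  "keep_ckpts": 3
}
```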
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** RTX 3090
- **Hours used:** 41.6
- **Provider:** Self-hosted (personal hardware)
- **Compute Region:** Mainland China
- **Carbon Emitted:** ~16.02 kg CO₂eq