---
tags:
- DiffSVC
- pre-trained_model
- basemodel
- diff-svc
license: "gpl"
datasets:
- 512rc_50k
- 512rc_80k
- 512rc_100k
---

**English** | [简体中文](./README_CN.md)

# DiffSVCBaseModel

A Diff-SVC base model for all kinds of voices.

## How to use?

1. Choose and download a model from the table below.
2. Fill in your config and put your dataset into `(diffsvc-root)/data/raw/{speaker_name}/`.
3. Put the base model (the .ckpt file only) into `(diffsvc-root)/checkpoints/{speaker_name}`.
4. Start preprocessing and training as usual (a sketch of this step is given at the end of this page).

## How much data do you use?

I use two public datasets (OpenCPOP and M4Singer), 40h+ of audio in total.

## I want to train my own base model!

OK, you can download [this binary file](./BaseModelBinary.tar.gz).

## Download

**Please choose a model that matches your config.yaml or config_nsf.yaml.**

| Version           | URL                                 | Reference value of lr |
| ----------------- | ----------------------------------- | --------------------- |
| 384rc, 50k steps  | [Click here](./384rc_50k_step.zip)  | 0.0016                |
| 384rc, 80k steps  | [Click here](./384rc_80k_step.zip)  | 0.0032                |
| 384rc, 100k steps | [Click here](./384rc_100k_step.zip) | 0.0032                |

> rc: residual_channels

More coming soon...

## Repos

| Repo                                                | URL                                                                  |
| --------------------------------------------------- | -------------------------------------------------------------------- |
| Diff-SVC                                            | [Click here](https://github.com/prophesier/diff-svc)                 |
| 44.1KHz Vocoder                                     | [Click here](https://openvpi.github.io/vocoders)                     |
| M4Singer                                            | [Click here](https://github.com/M4Singer/M4Singer)                   |
| OpenCPOP                                            | [Click here](https://github.com/wenet-e2e/opencpop)                  |
| Pre-trained_Models (my friend's pre-trained models) | [Click here](https://huggingface.co/Erythrocyte/Pre-trained_Models)  |
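
## Example: checking the layout and starting training

The sketch below illustrates steps 2–4 above: it checks that the raw audio and the base-model .ckpt are where Diff-SVC expects them, that `residual_channels` in your config matches the base model you downloaded, and then launches the usual preprocessing and training commands. It is a minimal, hypothetical helper, not part of Diff-SVC or this model; the script name, the `SPEAKER`/`BASE_MODEL_RC` values, and the exact entry points and config keys are assumptions based on the upstream Diff-SVC repo, so check them against your own copy before running.

```python
# sanity_check_and_train.py - hypothetical helper, not part of Diff-SVC itself.
# Run from the Diff-SVC root after completing steps 1-4 above.
import subprocess
import sys
from pathlib import Path

import yaml  # PyYAML

SPEAKER = "my_speaker"                  # replace with your {speaker_name}
CONFIG = Path("training/config.yaml")   # or config_nsf.yaml, whichever you use
BASE_MODEL_RC = 384                     # residual_channels of the downloaded base model


def main() -> None:
    root = Path(".")
    raw_dir = root / "data" / "raw" / SPEAKER
    ckpt_dir = root / "checkpoints" / SPEAKER

    # Steps 2-3: raw audio and the base-model .ckpt must be in place.
    if not any(raw_dir.rglob("*.wav")):
        sys.exit(f"No .wav files found under {raw_dir}")
    if not any(ckpt_dir.glob("*.ckpt")):
        sys.exit(f"No base-model .ckpt found under {ckpt_dir}")

    # The base model must match residual_channels in your config; lr should
    # follow the reference value from the Download table above.
    cfg = yaml.safe_load(CONFIG.read_text(encoding="utf-8"))
    if cfg.get("residual_channels") != BASE_MODEL_RC:
        sys.exit(
            f"config residual_channels={cfg.get('residual_channels')} "
            f"does not match the base model ({BASE_MODEL_RC})"
        )
    print(f"Config OK: residual_channels={BASE_MODEL_RC}, lr={cfg.get('lr')}")

    # Step 4: preprocessing, then training, via the usual Diff-SVC entry points
    # (verify these paths/flags against the version of the repo you cloned).
    subprocess.run(
        [sys.executable, "preprocessing/binarize.py", "--config", str(CONFIG)],
        check=True,
    )
    subprocess.run(
        [sys.executable, "run.py", "--config", str(CONFIG),
         "--exp_name", SPEAKER, "--reset"],
        check=True,
    )


if __name__ == "__main__":
    main()
```

If the `residual_channels` check fails, either download the matching base model or change your config; resuming from a checkpoint whose width does not match the config will not load correctly.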