---
license: mit
language:
- ja
metrics:
- accuracy
pipeline_tag: audio-to-audio
tags:
- rvc
---
# <center> RVC Genshin Impact Japanese Voice Model<br />
![model-cover.png](https://huggingface.co/ArkanDash/rvc-genshin-impact/resolve/main/model-cover.png)
# About Retrieval based Voice Conversion (RVC)
Learn more about Retrieval-based Voice Conversion at the link below:<br />
[RVC WebUI](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
# How to use?
Download the pre-zipped model and extract it into your RVC project.
Model test: [Google Colab](https://colab.research.google.com/drive/110kiMZTdP6Ri1lY9-NbQf17GVPPhHyeT?usp=sharing) / [RVC Models New](https://huggingface.co/spaces/ArkanDash/rvc-models-new) (the same thing, hosted on Hugging Face Spaces)
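As a small sketch of the download step, every file in this repo can be fetched from a fixed "resolve" URL pattern. The filename `kuki-jp.zip` below is a hypothetical example; check the repo's file list for the actual zip names.

```python
REPO_ID = "ArkanDash/rvc-genshin-impact"

def model_url(filename: str) -> str:
    """Build the direct Hugging Face download URL for a file in this repo."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

# Hypothetical filename -- browse the repo's "Files" tab for the real names.
print(model_url("kuki-jp.zip"))
# -> https://huggingface.co/ArkanDash/rvc-genshin-impact/resolve/main/kuki-jp.zip
```

After downloading, unzip the archive and place the extracted files where your RVC project expects its voice models (for the RVC WebUI, this is typically the `weights` folder).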
## <center> INFO <br />
Model Created by ArkanDash <br />
The voice that was used in this model belongs to Hoyoverse.
The voice data used to train these models was ripped from the game (version 3.7). <br />
Total models: 34 (19 V1 models & 15 V2 models)
Plans: <br />
- Nahida V2 RVC
- Zhongli V2 RVC
Replaced:
- The Raiden Shogun model has been retrained on a newer dataset because the old model's voice quality was poor; the old model has been deleted.
### V1 Model <br />
These models were trained on the original RVC.<br />
Pitch extraction uses Harvest.<br />
Each model was trained for 100 epochs with a batch size of 10 at a 40 kHz sample rate (some models use 48 kHz).<br />
Every V1 model was trained on roughly 30 minutes of character voice.
Some models were trained for more epochs because of the short duration of the character's voice data:<br />
- Klee 150 Epochs
- Fischl 150 Epochs
### (New) V2 Model <br />
These models were trained on the Mangio-Fork of RVC.<br />
Pitch extraction uses Crepe.<br />
Each model was trained for 100 epochs with a batch size of 8 at a 48 kHz sample rate (some models use 40 kHz).<br />
Every V2 model was trained on roughly 60 minutes of character voice.
Other requests:<br />
- Greater Lord Rukkhadevata: 750 epochs, batch size 16, 48 kHz sample rate (10-minute dataset)
- Charlotte: 400 epochs, batch size 16, 48 kHz sample rate (18-minute dataset)
Note:
- For Faruzan, the index file is unexpectedly small, and training produced this log: <br />
`Converged (lack of improvement in inertia) at step 1152/48215` <br />
I might retrain Faruzan soon.