Chitsanfei committed
Commit: 77ca67d
Parent(s): d385313
readme: update details
app.py
CHANGED
@@ -46,24 +46,26 @@ with app:
     with gr.Tabs():
         with gr.TabItem("Basic"):
             gr.Markdown(value="""
-            # sovits-emu-voice-transform
+            # sovits-emu-voice-transform | an online voice changer that can turn your voice into Otori Emu (凤笑梦)
 
             [![Visitors](https://api.visitorbadge.io/api/visitors?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2FMashiroSA%2Fsovits-emu-voice-transform&labelColor=%23f47373&countColor=%23555555)](https://visitorbadge.io/status?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2FMashiroSA%2Fsovits-emu-voice-transform)
 
-            _Modified from public demo based on so-vits-svc 4.0._
-            Modified from a public demo based on so-vits-svc 4.0.
+            _Modified from the public demo based on so-vits-svc 4.0. The model, trained on dialogue of the character Otori Emu, performs well on speech, but singing-voice conversion falls short of expectations. It is authorized to run only on Hugging Face, and conversion on the free instance is much slower, so please be patient._
 
-
-            The model used here, trained on dialogue of the character 鳳えむ (Otori Emu), works well for speech; singing conversion is weaker.
-
-            _Only authorized running on huggingface, with free instance conversion is much slower. Please be patient._
-            Authorized to run only on huggingface; conversion on the free instance is very, very slow, please be patient.
+            Modified from a public demo based on so-vits-svc 4.0. The model, trained on dialogue of the character 鳳えむ (Otori Emu), works well for speech but less well for singing. Authorized to run only on Hugging Face; conversion on the free instance is very, very slow, so please be patient.
 
             ```text
             For academic exchange only and not for illegal purposes. We have no relationship or interest with SEGA or related organizations.
             The model derivation output is only similar to Otori Emu and there is inevitable loss, which cannot be fully simulated.
             If you have any questions, please send an email or forum for inquiry.
             ```
+
+            *How to use*
+            - For everyday speech conversion, record a dry (unaccompanied) vocal clip shorter than 90 s, upload it, check the automatic f0 prediction option below, leave the other settings alone, and convert; the converted audio is ready after a short wait.
+            - For vocals in a song, use your own a cappella take or extract the dry vocal with the UVR5 software, upload it, leave automatic f0 prediction unchecked, transpose as needed (in testing the model matches best above middle C (C4): add +12 semitones for a male dry vocal, female vocals can usually stay unchanged), then convert.
+            - After conversion, the ellipsis ("...") to the right of the progress bar is where you can download the result.
+            - The maintainer of this repo, @MashiroSA, cannot see your uploaded or converted audio; at most Hugging Face itself might, so don't worry.
+
             """)
             spks = list(model.spk2id.keys())
             sid = gr.Dropdown(label="音色", choices=spks, value=spks[0])
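The visible lines of the "Basic" tab only show the speaker dropdown, while the usage notes above also mention an audio upload, an automatic f0 prediction toggle, and a transpose setting. The sketch below is a minimal, hypothetical reconstruction of such a tab in Gradio: `SPEAKERS`, `convert`, and the control names are assumptions, and the placeholder body stands in for the so-vits-svc inference call that this diff does not show.

```python
import gradio as gr

SPEAKERS = ["emu"]  # stands in for list(model.spk2id.keys()) in the real app


def convert(speaker, audio_path, auto_f0, transpose):
    """Placeholder for the so-vits-svc inference call (not shown in the diff).

    speaker    -- target voice chosen in the dropdown
    audio_path -- uploaded dry vocal (under 90 s recommended)
    auto_f0    -- automatic f0 prediction (check for speech, uncheck for singing)
    transpose  -- pitch shift in semitones (+12 suggested for male dry vocals)
    """
    # The real app would run the model here and return the converted audio file.
    return audio_path


with gr.Blocks() as demo:
    with gr.Tabs():
        with gr.TabItem("Basic"):
            sid = gr.Dropdown(label="音色", choices=SPEAKERS, value=SPEAKERS[0])
            audio_in = gr.Audio(label="Dry vocal (under 90 s)", type="filepath")
            auto_f0 = gr.Checkbox(label="Automatic f0 prediction (check for speech)", value=False)
            transpose = gr.Slider(minimum=-12, maximum=12, step=1, value=0,
                                  label="Transpose (semitones)")
            run = gr.Button("Convert")
            audio_out = gr.Audio(label="Converted audio")
            run.click(convert, inputs=[sid, audio_in, auto_f0, transpose], outputs=audio_out)

demo.launch()
```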
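The "+12 semitones for a male dry vocal" tip in the usage notes is simply an octave shift. This small standalone snippet (not from the repo) illustrates the standard equal-temperament relationship between a semitone offset and the resulting fundamental frequency:

```python
# Equal temperament: shifting by n semitones scales the fundamental by 2 ** (n / 12).
def shifted_f0(f0_hz: float, semitones: int) -> float:
    """Return the fundamental frequency after transposing by `semitones`."""
    return f0_hz * 2 ** (semitones / 12)


print(shifted_f0(110.0, 12))  # 220.0 -- a typical male pitch moved up one octave
```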