Artrajz committed on
Commit
749a63c
1 Parent(s): c5ed230

Update README.md

Files changed (1)
  1. README.md +8 -622
README.md CHANGED
@@ -1,622 +1,8 @@
- <div class="title" align=center>
- <h1>vits-simple-api</h1>
- <div>Simply call the VITS API</div>
- <br/>
- <br/>
- <p>
- <img src="https://img.shields.io/github/license/Artrajz/vits-simple-api">
- <img src="https://img.shields.io/badge/python-3.9%7C3.10-green">
- <a href="https://hub.docker.com/r/artrajz/vits-simple-api">
- <img src="https://img.shields.io/docker/pulls/artrajz/vits-simple-api"></a>
- </p>
- <a href="https://github.com/Artrajz/vits-simple-api/blob/main/README.md">English</a>|<a href="https://github.com/Artrajz/vits-simple-api/blob/main/README_zh.md">中文文档</a>
- <br/>
- </div>
-
-
- # Feature
-
- - [x] VITS text-to-speech
- - [x] VITS voice conversion
- - [x] HuBert-soft VITS
- - [x] W2V2 VITS / emotional-vits dimensional emotion model
- - [x] Support for loading multiple models
- - [x] Automatic language recognition and processing; the language detection scope is set according to the model's cleaner, and a custom language range is also supported
- - [x] Customizable default parameters
- - [x] Long text batch processing
- - [x] GPU-accelerated inference
- - [x] SSML (Speech Synthesis Markup Language) support (work in progress)
-
- <details><summary>Update Logs</summary><pre><code>
- <h2>2023.5.24</h2>
- <p>Added the dimensional_emotion API, which can load multiple npy files from a folder. The Docker image now supports the linux/arm64 and linux/arm64/v8 platforms.</p>
- <h2>2023.5.15</h2>
- <p>Added english_cleaner. To use it, you need to install espeak separately.</p>
- <h2>2023.5.12</h2>
- <p>Added support for SSML, though it still needs improvement. Refactored some functions and renamed "speaker_id" to "id" in hubert_vits.</p>
- <h2>2023.5.2</h2>
- <p>Added support for the w2v2-vits/emotional-vits model, updated the speakers mapping table, and added support for the languages corresponding to the model.</p>
- <h2>2023.4.23</h2>
- <p>Added API key authentication, disabled by default; it needs to be enabled in config.py.</p>
- <h2>2023.4.17</h2>
- <p>Added an option so that a single-language cleaner only cleans text annotated for it, and added GPU acceleration for inference; the GPU inference environment needs to be installed manually.</p>
- <h2>2023.4.12</h2>
- <p>Renamed the project from MoeGoe-Simple-API to vits-simple-api, added support for batch processing of long texts, and added the segmentation threshold "max" for long texts.</p>
- <h2>2023.4.7</h2>
- <p>Added a configuration file to customize default parameters. This update requires manually updating config.py; see config.py for specific usage.</p>
- <h2>2023.4.6</h2>
- <p>Added the "auto" option for automatically recognizing the language of the text, and changed the default value of the "lang" parameter to "auto". Automatic recognition still has some defects, so please select the language manually when needed.</p>
- <p>Unified the POST request type as multipart/form-data.</p>
- </code></pre></details>
-
-
- ## demo
-
- `https://api.artrajz.cn/py/voice/vits?text=你好,こんにちは&id=142`
- excited: `https://api.artrajz.cn/py/voice/w2v2-vits?text=こんにちは&id=3&emotion=111`
- whispered: `https://api.artrajz.cn/py/voice/w2v2-vits?text=こんにちは&id=3&emotion=2077`
-
- https://user-images.githubusercontent.com/73542220/237995061-c1f25b4e-dd86-438a-9363-4bb1fe65b425.mov
-
- The demo server is unstable due to its relatively low configuration.
-
- # Deploy
-
- ## Docker
-
- ### Docker image pull script
-
- ```
- bash -c "$(wget -O- https://raw.githubusercontent.com/Artrajz/vits-simple-api/main/vits-simple-api-installer-latest.sh)"
- ```
-
- - The platforms currently supported by Docker images are `linux/amd64` and `linux/arm64`.
- - After a successful pull, the VITS model needs to be imported before use. Please follow the steps below to import the model.
-
- ### Download VITS model
-
- Put the model into `/usr/local/vits-simple-api/Model`
-
- <details><summary>Folder structure</summary><pre><code>
- │ hubert-soft-0d54a1f4.pt
- │ model.onnx
- │ model.yaml
-
- ├─g
- │ config.json
- │ G_953000.pth
-
- ├─louise
- │ 360_epochs.pth
- │ config.json
-
- ├─Nene_Nanami_Rong_Tang
- │ 1374_epochs.pth
- │ config.json
-
- ├─Zero_no_tsukaima
- │ 1158_epochs.pth
- │ config.json
-
- └─npy
-     25ecb3f6-f968-11ed-b094-e0d4e84af078.npy
-     all_emotions.npy
- </code></pre></details>
-
-
- ### Modify model path
-
- Modify in `/usr/local/vits-simple-api/config.py`
-
- <details><summary>config.py</summary><pre><code>
- # Fill in the model path here
- MODEL_LIST = [
-     # VITS
-     [ABS_PATH + "/Model/Nene_Nanami_Rong_Tang/1374_epochs.pth", ABS_PATH + "/Model/Nene_Nanami_Rong_Tang/config.json"],
-     [ABS_PATH + "/Model/Zero_no_tsukaima/1158_epochs.pth", ABS_PATH + "/Model/Zero_no_tsukaima/config.json"],
-     [ABS_PATH + "/Model/g/G_953000.pth", ABS_PATH + "/Model/g/config.json"],
-     # HuBert-VITS (Need to configure HUBERT_SOFT_MODEL)
-     [ABS_PATH + "/Model/louise/360_epochs.pth", ABS_PATH + "/Model/louise/config.json"],
-     # W2V2-VITS (Need to configure DIMENSIONAL_EMOTION_NPY)
-     [ABS_PATH + "/Model/w2v2-vits/1026_epochs.pth", ABS_PATH + "/Model/w2v2-vits/config.json"],
- ]
- # hubert-vits: hubert soft model
- HUBERT_SOFT_MODEL = ABS_PATH + "/Model/hubert-soft-0d54a1f4.pt"
- # w2v2-vits: dimensional emotion npy file
- # load a single npy: ABS_PATH + "/all_emotions.npy"
- # load multiple npy: [ABS_PATH + "/emotions1.npy", ABS_PATH + "/emotions2.npy"]
- # load multiple npy from a folder: ABS_PATH + "/Model/npy"
- DIMENSIONAL_EMOTION_NPY = ABS_PATH + "/Model/npy"
- # w2v2-vits: both `model.onnx` and `model.yaml` must be in the same path.
- DIMENSIONAL_EMOTION_MODEL = ABS_PATH + "/Model/model.yaml"
- </code></pre></details>
-
-
- ### Startup
-
- `docker compose up -d`
-
- Or execute the pull script again.
-
- ### Image update
-
- Run the Docker image pull script again.
-
- ## Virtual environment deployment
-
- ### Clone
-
- `git clone https://github.com/Artrajz/vits-simple-api.git`
-
- ### Download python dependencies
-
- A Python virtual environment is recommended; use Python >= 3.9.
-
- `pip install -r requirements.txt`
-
- fasttext may fail to install on Windows. You can install it with the following commands, or download wheels [here](https://www.lfd.uci.edu/~gohlke/pythonlibs/#fasttext):
-
- ```
- # python3.10 win_amd64
- pip install https://github.com/Artrajz/archived/raw/main/fasttext/fasttext-0.9.2-cp310-cp310-win_amd64.whl
- # python3.9 win_amd64
- pip install https://github.com/Artrajz/archived/raw/main/fasttext/fasttext-0.9.2-cp39-cp39-win_amd64.whl
- ```
-
- ### Download VITS model
-
- Put the model into `/path/to/vits-simple-api/Model`
-
- <details><summary>Folder structure</summary><pre><code>
- │ hubert-soft-0d54a1f4.pt
- │ model.onnx
- │ model.yaml
-
- ├─g
- │ config.json
- │ G_953000.pth
-
- ├─louise
- │ 360_epochs.pth
- │ config.json
-
- ├─Nene_Nanami_Rong_Tang
- │ 1374_epochs.pth
- │ config.json
-
- ├─Zero_no_tsukaima
- │ 1158_epochs.pth
- │ config.json
-
- └─npy
-     25ecb3f6-f968-11ed-b094-e0d4e84af078.npy
-     all_emotions.npy
- </code></pre></details>
-
-
- ### Modify model path
-
- Modify in `/path/to/vits-simple-api/config.py`
-
- <details><summary>config.py</summary><pre><code>
- # Fill in the model path here
- MODEL_LIST = [
-     # VITS
-     [ABS_PATH + "/Model/Nene_Nanami_Rong_Tang/1374_epochs.pth", ABS_PATH + "/Model/Nene_Nanami_Rong_Tang/config.json"],
-     [ABS_PATH + "/Model/Zero_no_tsukaima/1158_epochs.pth", ABS_PATH + "/Model/Zero_no_tsukaima/config.json"],
-     [ABS_PATH + "/Model/g/G_953000.pth", ABS_PATH + "/Model/g/config.json"],
-     # HuBert-VITS (Need to configure HUBERT_SOFT_MODEL)
-     [ABS_PATH + "/Model/louise/360_epochs.pth", ABS_PATH + "/Model/louise/config.json"],
-     # W2V2-VITS (Need to configure DIMENSIONAL_EMOTION_NPY)
-     [ABS_PATH + "/Model/w2v2-vits/1026_epochs.pth", ABS_PATH + "/Model/w2v2-vits/config.json"],
- ]
- # hubert-vits: hubert soft model
- HUBERT_SOFT_MODEL = ABS_PATH + "/Model/hubert-soft-0d54a1f4.pt"
- # w2v2-vits: dimensional emotion npy file
- # load a single npy: ABS_PATH + "/all_emotions.npy"
- # load multiple npy: [ABS_PATH + "/emotions1.npy", ABS_PATH + "/emotions2.npy"]
- # load multiple npy from a folder: ABS_PATH + "/Model/npy"
- DIMENSIONAL_EMOTION_NPY = ABS_PATH + "/Model/npy"
- # w2v2-vits: both `model.onnx` and `model.yaml` must be in the same path.
- DIMENSIONAL_EMOTION_MODEL = ABS_PATH + "/Model/model.yaml"
- </code></pre></details>
-
-
- ### Startup
-
- `python app.py`
-
- # GPU accelerated
-
- ## Windows
- ### Install CUDA
- Check the highest version of CUDA supported by your graphics card:
- ```
- nvidia-smi
- ```
- Taking CUDA 11.7 as an example, download it from the [official website](https://developer.nvidia.com/cuda-11-7-0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local)
- ### Install the GPU version of PyTorch
- ```
- pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
- ```
- You can find the corresponding command for the version you need on the [official website](https://pytorch.org/get-started/locally/)
- ## Linux
- The installation process is similar, but I don't have the environment to test it.
-
- # Openjtalk Installation Issue
-
- If you are using an arm64 platform, you may encounter issues during installation because the official PyPI site lacks arm64-compatible whl files. In that case, you can install Openjtalk from the whl file I have built:
-
- ```
- pip install openjtalk==0.3.0.dev2 --index-url https://pypi.artrajz.cn/simple
- ```
-
- Alternatively, you can build the whl manually by following this [tutorial](https://artrajz.cn/index.php/archives/167/).
-
- # API
-
- ## GET
-
- #### speakers list
-
- - GET http://127.0.0.1:23456/voice/speakers
-
- Returns the mapping table of role IDs to speaker names.
-
- #### voice vits
-
- - GET http://127.0.0.1/voice?text=text
-
- Default values are used for parameters that are not specified.
-
- - GET http://127.0.0.1/voice?text=[ZH]text[ZH][JA]text[JA]&lang=mix
-
- When lang=mix, the text needs to be annotated with language tags.
-
- - GET http://127.0.0.1/voice?text=text&id=142&format=wav&lang=zh&length=1.4
-
- The text is "text", the role ID is 142, the audio format is wav, the text language is zh, the speech length is 1.4, and the other parameters use their defaults.
-
- #### check
-
- - GET http://127.0.0.1:23456/voice/check?id=0&model=vits
-
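The GET examples above can be built programmatically. Below is a minimal sketch; the helper name `vits_url` is illustrative and not part of the API, and parameters left out fall back to the server defaults described in the Parameter section:

```python
from urllib.parse import urlencode

base = "http://127.0.0.1:23456"

def vits_url(text, **params):
    """Build a /voice GET URL from a text and optional query parameters."""
    return f"{base}/voice?" + urlencode({"text": text, **params})

# Matches the third example above: role 142, wav output, Chinese text, length 1.4
url = vits_url("text", id=142, format="wav", lang="zh", length=1.4)
```

The resulting URL can then be fetched with any HTTP client, such as the `requests` calls shown in the POST section below.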
- ## POST
-
- - python
-
- ```python
- import os
- import random
- import re
- import string
-
- import requests
- from requests_toolbelt.multipart.encoder import MultipartEncoder
-
- abs_path = os.path.dirname(__file__)
- base = "http://127.0.0.1:23456"
-
-
- # Speaker mapping table
- def voice_speakers():
-     url = f"{base}/voice/speakers"
-
-     res = requests.post(url=url)
-     json = res.json()
-     for i in json:
-         print(i)
-         for j in json[i]:
-             print(j)
-     return json
-
-
- # Speech synthesis (voice vits)
- def voice_vits(text, id=0, format="wav", lang="auto", length=1, noise=0.667, noisew=0.8, max=50):
-     fields = {
-         "text": text,
-         "id": str(id),
-         "format": format,
-         "lang": lang,
-         "length": str(length),
-         "noise": str(noise),
-         "noisew": str(noisew),
-         "max": str(max)
-     }
-     boundary = '----VoiceConversionFormBoundary' + ''.join(random.sample(string.ascii_letters + string.digits, 16))
-
-     m = MultipartEncoder(fields=fields, boundary=boundary)
-     headers = {"Content-Type": m.content_type}
-     url = f"{base}/voice"
-
-     res = requests.post(url=url, data=m, headers=headers)
-     fname = re.findall("filename=(.+)", res.headers["Content-Disposition"])[0]
-     path = f"{abs_path}/{fname}"
-
-     with open(path, "wb") as f:
-         f.write(res.content)
-     print(path)
-     return path
-
-
- # Voice conversion (hubert-vits)
- def voice_hubert_vits(upload_path, id, format="wav", length=1, noise=0.667, noisew=0.8):
-     upload_name = os.path.basename(upload_path)
-     upload_type = f'audio/{upload_name.split(".")[-1]}'  # wav, ogg
-
-     with open(upload_path, 'rb') as upload_file:
-         fields = {
-             "upload": (upload_name, upload_file, upload_type),
-             "id": str(id),
-             "format": format,
-             "length": str(length),
-             "noise": str(noise),
-             "noisew": str(noisew),
-         }
-         boundary = '----VoiceConversionFormBoundary' + ''.join(random.sample(string.ascii_letters + string.digits, 16))
-
-         m = MultipartEncoder(fields=fields, boundary=boundary)
-         headers = {"Content-Type": m.content_type}
-         url = f"{base}/voice/hubert-vits"
-
-         res = requests.post(url=url, data=m, headers=headers)
-
-     fname = re.findall("filename=(.+)", res.headers["Content-Disposition"])[0]
-     path = f"{abs_path}/{fname}"
-
-     with open(path, "wb") as f:
-         f.write(res.content)
-     print(path)
-     return path
-
-
- # Dimensional emotion model (w2v2-vits)
- def voice_w2v2_vits(text, id=0, format="wav", lang="auto", length=1, noise=0.667, noisew=0.8, max=50, emotion=0):
-     fields = {
-         "text": text,
-         "id": str(id),
-         "format": format,
-         "lang": lang,
-         "length": str(length),
-         "noise": str(noise),
-         "noisew": str(noisew),
-         "max": str(max),
-         "emotion": str(emotion)
-     }
-     boundary = '----VoiceConversionFormBoundary' + ''.join(random.sample(string.ascii_letters + string.digits, 16))
-
-     m = MultipartEncoder(fields=fields, boundary=boundary)
-     headers = {"Content-Type": m.content_type}
-     url = f"{base}/voice/w2v2-vits"
-
-     res = requests.post(url=url, data=m, headers=headers)
-     fname = re.findall("filename=(.+)", res.headers["Content-Disposition"])[0]
-     path = f"{abs_path}/{fname}"
-
-     with open(path, "wb") as f:
-         f.write(res.content)
-     print(path)
-     return path
-
-
- # Voice conversion between roles within the same VITS model
- def voice_conversion(upload_path, original_id, target_id):
-     upload_name = os.path.basename(upload_path)
-     upload_type = f'audio/{upload_name.split(".")[-1]}'  # wav, ogg
-
-     with open(upload_path, 'rb') as upload_file:
-         fields = {
-             "upload": (upload_name, upload_file, upload_type),
-             "original_id": str(original_id),
-             "target_id": str(target_id),
-         }
-         boundary = '----VoiceConversionFormBoundary' + ''.join(random.sample(string.ascii_letters + string.digits, 16))
-         m = MultipartEncoder(fields=fields, boundary=boundary)
-
-         headers = {"Content-Type": m.content_type}
-         url = f"{base}/voice/conversion"
-
-         res = requests.post(url=url, data=m, headers=headers)
-
-     fname = re.findall("filename=(.+)", res.headers["Content-Disposition"])[0]
-     path = f"{abs_path}/{fname}"
-
-     with open(path, "wb") as f:
-         f.write(res.content)
-     print(path)
-     return path
-
-
- def voice_ssml(ssml):
-     fields = {
-         "ssml": ssml,
-     }
-     boundary = '----VoiceConversionFormBoundary' + ''.join(random.sample(string.ascii_letters + string.digits, 16))
-
-     m = MultipartEncoder(fields=fields, boundary=boundary)
-     headers = {"Content-Type": m.content_type}
-     url = f"{base}/voice/ssml"
-
-     res = requests.post(url=url, data=m, headers=headers)
-     fname = re.findall("filename=(.+)", res.headers["Content-Disposition"])[0]
-     path = f"{abs_path}/{fname}"
-
-     with open(path, "wb") as f:
-         f.write(res.content)
-     print(path)
-     return path
-
-
- def voice_dimensional_emotion(upload_path):
-     upload_name = os.path.basename(upload_path)
-     upload_type = f'audio/{upload_name.split(".")[-1]}'  # wav, ogg
-
-     with open(upload_path, 'rb') as upload_file:
-         fields = {
-             "upload": (upload_name, upload_file, upload_type),
-         }
-         boundary = '----VoiceConversionFormBoundary' + ''.join(random.sample(string.ascii_letters + string.digits, 16))
-
-         m = MultipartEncoder(fields=fields, boundary=boundary)
-         headers = {"Content-Type": m.content_type}
-         url = f"{base}/voice/dimension-emotion"
-
-         res = requests.post(url=url, data=m, headers=headers)
-
-     fname = re.findall("filename=(.+)", res.headers["Content-Disposition"])[0]
-     path = f"{abs_path}/{fname}"
-
-     with open(path, "wb") as f:
-         f.write(res.content)
-     print(path)
-     return path
- ```
-
- ## API KEY
-
- Set `API_KEY_ENABLED = True` in `config.py` to enable API key authentication, and set the key itself with `API_KEY = "api-key"`.
- Once enabled, GET requests must include the `api_key` query parameter, and POST requests must send the key in the `X-API-KEY` header.
-
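The two ways of passing the key described above can be sketched as follows; the host, endpoint, and key value mirror the examples in this README, and the variable names are illustrative:

```python
from urllib.parse import urlencode

base = "http://127.0.0.1:23456"
api_key = "api-key"  # must match API_KEY in config.py

# GET requests: append the key as the api_key query parameter
get_url = f"{base}/voice?" + urlencode({"text": "text", "api_key": api_key})

# POST requests: send the key in the X-API-KEY request header instead
post_headers = {"X-API-KEY": api_key}
```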
- # Parameter
-
- ## VITS
-
- | Name | Parameter | Required | Default | Type | Instruction |
- | ---------------------- | --------- | -------- | ------- | ----- | ----------- |
- | Synthesized text | text | true | | str | |
- | Role ID | id | false | 0 | int | |
- | Audio format | format | false | wav | str | Supported formats: wav, ogg, silk |
- | Text language | lang | false | auto | str | The language of the text to be synthesized. Available options are auto, zh, ja, and mix. When lang=mix, the text should be wrapped in [ZH] or [JA] tags. The default, auto, automatically detects the language of the text. |
- | Audio length | length | false | 1.0 | float | Adjusts the length of the synthesized speech, which is equivalent to adjusting its speed; the larger the value, the slower the speech. |
- | Noise | noise | false | 0.667 | float | |
- | Noise weight | noisew | false | 0.8 | float | |
- | Segmentation threshold | max | false | 50 | int | Splits the text into segments at punctuation marks and merges them into one segment until the length exceeds max. If max<=0, the text is not segmented. |
-
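The `max` segmentation behavior described in the table above can be sketched as follows. This is an illustrative approximation, not the server's exact implementation; the function name and punctuation set are assumptions:

```python
import re

def segment(text, max_len=50):
    """Split text after punctuation marks, then greedily merge the pieces
    until a merged segment would exceed max_len. max_len <= 0 disables splitting."""
    if max_len <= 0:
        return [text]
    pieces = [p for p in re.split(r"(?<=[。!?.!?,,])", text) if p]
    segments, current = [], ""
    for piece in pieces:
        if current and len(current) + len(piece) > max_len:
            segments.append(current)
            current = piece
        else:
            current += piece
    if current:
        segments.append(current)
    return segments
```

Each resulting segment is synthesized separately, which is what makes long-text batch processing possible.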
- ## VITS voice conversion
-
- | Name | Parameter | Required | Default | Type | Instruction |
- | -------------- | ----------- | -------- | ------- | ---- | ----------- |
- | Uploaded audio | upload | true | | file | The audio file to upload, in wav or ogg format. |
- | Source role ID | original_id | true | | int | The ID of the role whose voice is in the uploaded audio. |
- | Target role ID | target_id | true | | int | The ID of the target role to convert the audio to. |
-
- ## HuBert-VITS
-
- | Name | Parameter | Required | Default | Type | Instruction |
- | -------------- | --------- | -------- | ------- | ----- | ----------- |
- | Uploaded audio | upload | true | | file | The audio file to upload, in wav or ogg format. |
- | Target role ID | id | true | | int | |
- | Audio format | format | true | | str | wav, ogg, silk |
- | Audio length | length | true | | float | Adjusts the length of the synthesized speech, which is equivalent to adjusting its speed; the larger the value, the slower the speech. |
- | Noise | noise | true | | float | |
- | Noise weight | noisew | true | | float | |
-
- ## W2V2-VITS
-
- | Name | Parameter | Required | Default | Type | Instruction |
- | ---------------------- | --------- | -------- | ------- | ----- | ----------- |
- | Synthesized text | text | true | | str | |
- | Role ID | id | false | 0 | int | |
- | Audio format | format | false | wav | str | Supported formats: wav, ogg, silk |
- | Text language | lang | false | auto | str | The language of the text to be synthesized. Available options are auto, zh, ja, and mix. When lang=mix, the text should be wrapped in [ZH] or [JA] tags. The default, auto, automatically detects the language of the text. |
- | Audio length | length | false | 1.0 | float | Adjusts the length of the synthesized speech, which is equivalent to adjusting its speed; the larger the value, the slower the speech. |
- | Noise | noise | false | 0.667 | float | |
- | Noise weight | noisew | false | 0.8 | float | |
- | Segmentation threshold | max | false | 50 | int | Splits the text into segments at punctuation marks and merges them into one segment until the length exceeds max. If max<=0, the text is not segmented. |
- | Dimensional emotion | emotion | false | 0 | int | The valid range depends on the npy emotion reference file; for example, [innnky](https://huggingface.co/spaces/innnky/nene-emotion/tree/main)'s all_emotions.npy covers 0-5457. |
-
- ## Dimensional emotion
-
- | Name | Parameter | Required | Default | Type | Instruction |
- | -------------- | --------- | -------- | ------- | ---- | ----------- |
- | Uploaded audio | upload | true | | file | The audio file to analyze; the endpoint returns an npy file storing the dimensional emotion vector. |
-
- ## SSML (Speech Synthesis Markup Language)
-
- Supported elements and attributes:
-
- `speak` element
-
- | Attribute | Instruction | Required |
- | --------- | ----------- | -------- |
- | id | Default value is read from `config.py` | false |
- | lang | Default value is read from `config.py` | false |
- | length | Default value is read from `config.py` | false |
- | noise | Default value is read from `config.py` | false |
- | noisew | Default value is read from `config.py` | false |
- | max | Splits the text into segments at punctuation marks and merges them into one segment until the combined length exceeds `max`. `max<=0` means no segmentation; the default is 0. | false |
- | model | Default is `vits`. Options: `w2v2-vits`, `emotion-vits` | false |
- | emotion | Only effective with `w2v2-vits` or `emotion-vits`; the valid range depends on the npy emotion reference file. | false |
-
- `voice` element
-
- Takes priority over `speak`.
-
- | Attribute | Instruction | Required |
- | --------- | ----------- | -------- |
- | id | Default value is read from `config.py` | false |
- | lang | Default value is read from `config.py` | false |
- | length | Default value is read from `config.py` | false |
- | noise | Default value is read from `config.py` | false |
- | noisew | Default value is read from `config.py` | false |
- | max | Splits the text into segments at punctuation marks and merges them into one segment until the combined length exceeds `max`. `max<=0` means no segmentation; the default is 0. | false |
- | model | Default is `vits`. Options: `w2v2-vits`, `emotion-vits` | false |
- | emotion | Only effective with `w2v2-vits` or `emotion-vits` | false |
-
- `break` element
-
- | Attribute | Instruction | Required |
- | --------- | ----------- | -------- |
- | strength | x-weak, weak, medium (default), strong, x-strong | false |
- | time | The absolute pause duration in seconds (such as `2s`) or milliseconds (such as `500ms`). Valid values range from 0 to 5000 milliseconds; if you set a value above the supported maximum, the service uses `5000ms`. If the `time` attribute is set, the `strength` attribute is ignored. | false |
-
- | Strength | Relative Duration |
- | :------- | :---------------- |
- | x-weak | 250 ms |
- | weak | 500 ms |
- | medium | 750 ms |
- | strong | 1000 ms |
- | x-strong | 1250 ms |
-
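The `break` rules in the two tables above amount to a small lookup with clamping; a sketch (the function name is illustrative, not part of the API):

```python
# Relative pause durations for the strength attribute, in milliseconds
BREAK_STRENGTH_MS = {
    "x-weak": 250, "weak": 500, "medium": 750, "strong": 1000, "x-strong": 1250,
}

def break_duration_ms(strength="medium", time_ms=None):
    """A set time attribute overrides strength; values above 5000 ms are clamped."""
    if time_ms is not None:
        return min(max(time_ms, 0), 5000)
    return BREAK_STRENGTH_MS[strength]
```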
- Example
-
- ```xml
- <speak lang="zh" format="mp3" length="1.2">
-     <voice id="92">这几天心里颇不宁静。</voice>
-     <voice id="125">今晚在院子里坐着乘凉,忽然想起日日走过的荷塘,在这满月的光里,总该另有一番样子吧。</voice>
-     <voice id="142">月亮渐渐地升高了,墙外马路上孩子们的欢笑,已经听不见了;</voice>
-     <voice id="98">妻在屋里拍着闰儿,迷迷糊糊地哼着眠歌。</voice>
-     <voice id="120">我悄悄地披了大衫,带上门出去。</voice><break time="2s"/>
-     <voice id="121">沿着荷塘,是一条曲折的小煤屑路。</voice>
-     <voice id="122">这是一条幽僻的路;白天也少人走,夜晚更加寂寞。</voice>
-     <voice id="123">荷塘四面,长着许多树,蓊蓊郁郁的。</voice>
-     <voice id="124">路的一旁,是些杨柳,和一些不知道名字的树。</voice>
-     <voice id="125">没有月光的晚上,这路上阴森森的,有些怕人。</voice>
-     <voice id="126">今晚却很好,虽然月光也还是淡淡的。</voice><break time="2s"/>
-     <voice id="127">路上只我一个人,背着手踱着。</voice>
-     <voice id="128">这一片天地好像是我的;我也像超出了平常的自己,到了另一个世界里。</voice>
-     <voice id="129">我爱热闹,也爱冷静;<break strength="x-weak"/>爱群居,也爱独处。</voice>
-     <voice id="130">像今晚上,一个人在这苍茫的月下,什么都可以想,什么都可以不想,便觉是个自由的人。</voice>
-     <voice id="131">白天里一定要做的事,一定要说的话,现在都可不理。</voice>
-     <voice id="132">这是独处的妙处,我且受用这无边的荷香月色好了。</voice>
- </speak>
- ```
-
- # Communication
-
- For learning and communication; currently there is only a Chinese [QQ group](https://qm.qq.com/cgi-bin/qm/qr?k=-1GknIe4uXrkmbDKBGKa1aAUteq40qs_&jump_from=webapi&authKey=x5YYt6Dggs1ZqWxvZqvj3fV8VUnxRyXm5S5Kzntc78+Nv3iXOIawplGip9LWuNR/)
-
- # Acknowledgements
-
- - vits: https://github.com/jaywalnut310/vits
- - MoeGoe: https://github.com/CjangCjengh/MoeGoe
- - emotional-vits: https://github.com/innnky/emotional-vits
- - vits-uma-genshin-honkai: https://huggingface.co/spaces/zomehwh/vits-uma-genshin-honkai
-
 
+ ---
+ license: mit
+ title: vits-simple-api
+ sdk: gradio
+ pinned: true
+ python_version: 3.10.11
+ emoji: 👀
+ ---