chore: replace Vevo v1 weights with Vevo2 (RMSnow/Vevo2 inference subset)
Wipe the misfiled Vevo v1 layout and upload the inference-only files from RMSnow/Vevo2: AR (posttrained), FM (fm_emilia101k_singnet7k_repa), Vocos vocoder, and the content-style and prosody tokenizers. README rewritten to match. Training artifacts and the `_text`/`pretrained` variants are intentionally dropped.
- .gitattributes +1 -0
- LICENSE +0 -22
- README.md +37 -38
- acoustic_modeling/fm_emilia101k_singnet7k_repa/config.json +62 -0
- acoustic_modeling/{Vq8192ToMels → fm_emilia101k_singnet7k_repa}/model.safetensors +2 -2
- tokenizer/vq32/hubert_large_l18_mean_std.npz → acoustic_modeling/fm_emilia101k_singnet7k_repa/whisper_stats.pt +2 -2
- config.json +0 -5
- contentstyle_modeling/Vq32ToVq8192/model.safetensors +0 -3
- contentstyle_modeling/posttrained/added_tokens.json +0 -0
- contentstyle_modeling/posttrained/amphion_config.json +73 -0
- contentstyle_modeling/posttrained/config.json +29 -0
- contentstyle_modeling/posttrained/generation_config.json +14 -0
- contentstyle_modeling/posttrained/merges.txt +0 -0
- contentstyle_modeling/{PhoneToVq8192 → posttrained}/model.safetensors +2 -2
- contentstyle_modeling/posttrained/special_tokens_map.json +0 -0
- tokenizer/vq32/hubert_large_l18_c32.pkl → contentstyle_modeling/posttrained/tokenizer.json +2 -2
- contentstyle_modeling/posttrained/tokenizer_config.json +0 -0
- contentstyle_modeling/posttrained/vocab.json +0 -0
- tokenizer/{vq8192 → contentstyle_fvq16384_12.5hz}/model.safetensors +2 -2
- tokenizer/prosody_fvq512_6.25hz/model.safetensors +3 -0
- tokenizer/vq32/hubert_large_l18_c32.yaml +0 -30
- vocoder/config.json +84 -0
- {acoustic_modeling/Vocoder → vocoder}/model.safetensors +1 -1
- {acoustic_modeling/Vocoder → vocoder}/model_1.safetensors +1 -1
- {acoustic_modeling/Vocoder → vocoder}/model_2.safetensors +1 -1
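The file listing above defines which paths survive the cleanup. A minimal sketch of that allow-list, checked with stdlib `fnmatch` (the same glob style `huggingface_hub.snapshot_download` accepts for `allow_patterns`); the pattern list mirrors this commit's layout and is not an official manifest:

```python
# Sketch of the inference-only allow-list implied by this commit's file moves.
# Patterns are assumptions derived from the listing above, not an upstream spec.
from fnmatch import fnmatch

KEEP_PATTERNS = [
    "contentstyle_modeling/posttrained/*",
    "acoustic_modeling/fm_emilia101k_singnet7k_repa/*",
    "vocoder/*",
    "tokenizer/contentstyle_fvq16384_12.5hz/*",
    "tokenizer/prosody_fvq512_6.25hz/*",
]

def kept(path: str) -> bool:
    """True if a repo path survives the inference-only filter."""
    return any(fnmatch(path, pat) for pat in KEEP_PATTERNS)

# Files this commit keeps fall inside the allow-list...
assert kept("vocoder/model_2.safetensors")
assert kept("tokenizer/prosody_fvq512_6.25hz/model.safetensors")
# ...while removed v1 paths and the dropped "_text" variant do not.
assert not kept("contentstyle_modeling/Vq32ToVq8192/model.safetensors")
assert not kept("acoustic_modeling/fm_emilia101k_singnet7k_repa_text/optimizer.pt")
```

The same patterns could be passed to `snapshot_download(repo_id, allow_patterns=KEEP_PATTERNS)` to fetch only this subset from a mirror.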
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+contentstyle_modeling/posttrained/tokenizer.json filter=lfs diff=lfs merge=lfs -text
LICENSE
DELETED
@@ -1,22 +0,0 @@
-MIT License
-
-Copyright (c) 2024 OpenMMLab (Amphion)
-Mirrored by AEmotionStudio under original license terms.
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
README.md
CHANGED
@@ -5,75 +5,74 @@ tags:
 - voice-conversion
 - singing-voice
 - speech-synthesis
+- text-to-speech
 - vevo2
 - amphion
 - safetensors
 - maestraea
 pipeline_tag: audio-to-audio
-base_model:
+base_model: RMSnow/Vevo2
 ---
 
 # Vevo2 Models (Mæstræa Mirror)
 
-**Singing Voice Synthesis
+**Speech & Singing Voice Synthesis · Conversion · Editing · Style Transfer · Melody Control**
 
-[Original
+[Original Weights](https://huggingface.co/RMSnow/Vevo2) by [RMSnow](https://huggingface.co/RMSnow) · [Source Code](https://github.com/open-mmlab/Amphion/tree/main/models/svc/vevo2) by [OpenMMLab / Amphion](https://github.com/open-mmlab/Amphion) · MIT License
 
->
+> Mirror of the inference-only files from `RMSnow/Vevo2`, packaged for use with the [Mæstræa AI Workstation](https://github.com/AEmotionStudio/Maestraea). Training artifacts (`optimizer.pt`, `scheduler.pt`, `rng_state_*.pth`, `trainer_state.json`, `training_args.bin`, the `_text` FM variant, and the `pretrained` AR baseline) are dropped to keep the download lean. All credit for the model itself goes to the upstream authors.
 
 ## What's in This Repo
 
 | Path | Description | Size |
 |------|-------------|------|
-| `contentstyle_modeling/
-| `
-| `acoustic_modeling/
-| `
-| `tokenizer/
-| `tokenizer/
+| `contentstyle_modeling/posttrained/model.safetensors` | AR transformer (Qwen2.5-0.5B post-trained) | ~970 MB |
+| `acoustic_modeling/fm_emilia101k_singnet7k_repa/model.safetensors` | Flow-Matching transformer (~350M params) | ~1.4 GB |
+| `acoustic_modeling/fm_emilia101k_singnet7k_repa/whisper_stats.pt` | Per-channel mean/std for normed Whisper features | ~12 KB |
+| `vocoder/model*.safetensors` | Vocos vocoder (~250M, sharded) | ~1.2 GB |
+| `tokenizer/contentstyle_fvq16384_12.5hz/model.safetensors` | Content-style tokenizer (FVQ16384 @ 12.5 Hz) | ~234 MB |
+| `tokenizer/prosody_fvq512_6.25hz/model.safetensors` | Prosody tokenizer (FVQ512 @ 6.25 Hz) | ~261 MB |
+| `contentstyle_modeling/posttrained/{tokenizer.json, vocab.json, …}` | AR text tokenizer + configs | ~22 MB |
+| `*/config.json`, `amphion_config.json`, etc. | Per-component configs | small |
 
-**Total: ~
+**Total: ~4 GB**
+
+> Whisper-medium (~1.5 GB, used by the content-style tokenizer at inference time) is **not** mirrored here — `openai-whisper` will pull it to `~/.cache/whisper` on first run.
 
 ## What Vevo2 Does
 
-Vevo2 is a
+Vevo2 is a unified, controllable speech-and-singing voice generation system from the Amphion toolkit. The Mæstræa panel exposes six task tabs that all route to the same backend pipeline:
 
-- **
-- **
-- **
-- **
+- **Convert** — voice/timbre conversion, FM-only (fastest)
+- **TTS** — zero-shot text-to-speech / text-to-singing from a short reference clip
+- **Edit** — rewrite words while preserving voice, melody, prosody, and style
+- **Style** — singing style transfer (e.g. breathy → vibrato, pop → opera) preserving voice + melody
+- **Melody** — sing target lyrics over a humming, whistled, or instrumental melody
+- **SVC** — full singing voice conversion via the AR + FM pipeline (deeper than Convert)
 
 ### Architecture
 
-- **AR Model** (Qwen2.5-0.5B) —
-- **
-- **Vocos Vocoder** (~250M) —
-- **
+- **AR Model** (Qwen2.5-0.5B post-trained) — autoregressive content-style modeling
+- **Flow-Matching Transformer** (~350M) — acoustic generation
+- **Vocos Vocoder** (~250M) — high-quality 24 kHz waveform synthesis
+- **Content-style + Prosody Tokenizers** — FVQ codecs over Whisper / chromagram features
 
 ### VRAM Requirements
 
-| Reference Length | VRAM |
-|-----------------|------|
-
+| Reference Length | VRAM (GPU, FP16) |
+|------------------|------------------|
+| 15 s | ~6 GB |
+| 30 s | ~10 GB |
+| 45 s | ~12 GB |
 
-## Usage
-
-~/.maestraea/models/vevo2/
-```
+Recommendation: keep the timbre reference between 15–45 s. Longer references buy more identity fidelity at a real VRAM cost.
+
+## Usage in Mæstræa
+
+The Mæstræa runner clones [open-mmlab/Amphion](https://github.com/open-mmlab/Amphion) to `~/.maestraea/libs/amphion/` on first model load and imports the pipeline from `models.svc.vevo2.vevo2_utils.Vevo2InferencePipeline`. The download manager pulls these weights to `~/.maestraea/models/vevo2/` and the runner resolves checkpoints from that directory at the paths shown in the table above.
+
+For the standalone Amphion path, see the [upstream Vevo2 README](https://github.com/open-mmlab/Amphion/blob/main/models/svc/vevo2/README.md).
 
 ## License
 
-MIT
-
-## Credits
-
-- **Model**: [Amphion Vevo2](https://github.com/open-mmlab/Amphion/tree/main/models/vc/vevo2)
-- **Paper**: See [Amphion repository](https://github.com/open-mmlab/Amphion) for citation
-- **Mirror by**: [AEmotionStudio](https://huggingface.co/AEmotionStudio)
+MIT, inherited from upstream. Commercial use of generated audio is permitted. Don't clone someone's voice without their consent.
acoustic_modeling/fm_emilia101k_singnet7k_repa/config.json
ADDED
@@ -0,0 +1,62 @@
+{
+    "model_type": "FlowMatchingTransformer",
+    "preprocess": {
+        "hop_size": 480,
+        "sample_rate": 24000,
+        "n_fft": 1920,
+        "num_mels": 128,
+        "win_size": 1920,
+        "fmin": 0,
+        "fmax": 12000,
+        "mel_var": 8.14,
+        "mel_mean": -4.92,
+        "f0_fmin": 50.0,
+        "f0_fmax": 1100.0,
+        "load_phone": false,
+        "load_chromagram": true,
+        "load_semantic_features": true,
+    },
+    "model": {
+        "flow_matching_transformer": {
+            "mel_dim": 128,
+            "hidden_size": 1024,
+            "num_layers": 16,
+            "num_heads": 16,
+            "cfg_scale": 0.2,
+            "use_cond_code": true, // false means Hidden features
+            // "cond_dim": 1024, // HuBERT features dimension
+            "cond_codebook_size": 16384, // VQ Codebook Size
+            "cond_scale_factor": 4, // 1 means not use ReTrans. 4 means 12.5Hz * 4 = 50Hz. This should be aligned with the frame rate with Mels
+            "sigma": 1e-5,
+            "time_scheduler": "cos",
+            "whisper_perturb": false,
+            "repa": {
+                "layer_index": 5, // Use the Wav2Vec2Bert features to align. 5 means the 6th layer.
+                "output_dim": 1024, // The dimension of the Wav2Vec2Bert features.
+                "loss_type": "cos", // "cos" or "l1". By default is "l1"
+            },
+        },
+        "cond_sample_rate": 16000, // whisper: 16000
+        "coco": {
+            "coco_type": "content_style", // content, style, or content_style
+            "downsample_rate": 4, // The original frame rate is 50 Hz, downsample to 12.5 Hz
+            "codebook_size": 16384,
+            "hidden_size": 1024, // Representations Dim
+            "codebook_dim": 8,
+            "encoder": {
+                "vocos_dim": 384,
+                "vocos_intermediate_dim": 2048,
+                "vocos_num_layers": 12,
+            },
+            "decoder": {
+                "vocos_dim": 384,
+                "vocos_intermediate_dim": 2048,
+                "vocos_num_layers": 12,
+            },
+            "use_normed_whisper": true,
+            "whisper_stats_path": "models/svc/vevosing/config/whisper_stats.pt",
+            "whisper_dim": 1024,
+            "chromagram_dim": 24,
+        },
+    },
+}
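The FM config's comments tie `hop_size`, `sample_rate`, and `cond_scale_factor` together. A quick check of that arithmetic, using only numbers from the config above:

```python
# Arithmetic implied by the FM config: mel frame rate from hop_size/sample_rate,
# and the content-style code rate recovered via cond_scale_factor (ReTrans).
hop_size, sample_rate = 480, 24000
mel_frame_rate = sample_rate / hop_size          # 50.0 mel frames per second
cond_scale_factor = 4                            # 12.5 Hz * 4 = 50 Hz per the comment
code_rate = mel_frame_rate / cond_scale_factor   # content-style code rate

assert mel_frame_rate == 50.0
assert code_rate == 12.5  # matches tokenizer/contentstyle_fvq16384_12.5hz
```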
acoustic_modeling/{Vq8192ToMels → fm_emilia101k_singnet7k_repa}/model.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ef3733b3f92cf8f38e32f6d161f58751247b199cffc15cc1562e76c1289a7186
+size 1451496488
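The three-line stubs in these diffs are git-lfs pointer files, not the weights themselves. A minimal parser for that format, fed the pointer from the diff above:

```python
# Minimal parser for the git-lfs pointer files shown in these diffs
# ("version ... / oid sha256:... / size ..." key-value lines).
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

ptr = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:ef3733b3f92cf8f38e32f6d161f58751247b199cffc15cc1562e76c1289a7186\n"
    "size 1451496488\n"
)
assert ptr["size"] == 1451496488          # ~1.4 GB FM checkpoint
assert ptr["sha256"].startswith("ef3733")
```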
tokenizer/vq32/hubert_large_l18_mean_std.npz → acoustic_modeling/fm_emilia101k_singnet7k_repa/whisper_stats.pt
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:6117052b3e23e9075a79cc208ecc24328ae2f71a7dd4c9793db7c90a88b4a519
+size 9215
config.json
DELETED
@@ -1,5 +0,0 @@
-{
-  "download_tracking": {
-    "query_files": ["config.json", "*.safetensors"]
-  }
-}
contentstyle_modeling/Vq32ToVq8192/model.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:83e4984695487d6feeba9664c95375f9487e3326059b7239db1fe220e1d49b1d
-size 1925991368
contentstyle_modeling/posttrained/added_tokens.json
ADDED
(diff too large to render)
contentstyle_modeling/posttrained/amphion_config.json
ADDED
@@ -0,0 +1,73 @@
+{
+    "preprocess": {
+        "hop_size": 480,
+        "sample_rate": 24000,
+        "n_fft": 1920,
+        "num_mels": 128,
+        "win_size": 1920,
+        "fmin": 0,
+        "fmax": 12000,
+        "mel_var": 8.14,
+        "mel_mean": -4.92,
+        "f0_fmin": 50.0,
+        "f0_fmax": 1100.0,
+        "wav_code_frame_rate": 18.75, // Vevo2: 12.5 (Content-Style Code) + 6.25 (Prosody Code) = 18.75
+        "min_dur": 1,
+        "max_dur": 30,
+        "drop_prosody_id_prob": -1, // Dropping prosody ids means the Text-to-CS, while not dropping means the Text+Note-to-CS,
+        "pad_token_id": 151643, // <|endoftext|> for Qwen2.5-0.5B-Instruct,
+        "eos_token": "<|im_end|>",
+        "eos_token_id": 151645, // <|im_end|> for Qwen2.5-0.5B-Instruct,
+        // "tokenizer_path": "/mnt/data4/zhangxueyao/SpeechGenerationYC_ckpts/ckpts/vevo2/pretrained/Qwen2.5-0.5B-Instruct-add_prosody_contentstyle"
+    },
+    "model": {
+        // "pretrained_model_path": "/mnt/data4/zhangxueyao/SpeechGenerationYC_ckpts/ckpts/vevo2/pretrained/Qwen2.5-0.5B-Instruct-add_prosody_contentstyle", // Qwen2.5 Model
+        // "rl_init_model_path": "/mnt/data4/zhangxueyao/SpeechGenerationYC_ckpts/ckpts/vevo2/llm_dpo/dpo_qwen0.5B_intp2_highsim_3e-5/checkpoint_backup/epoch-0023_step-0027000_loss-0.000961", // DPO Model
+        "use_intelligibility_reward": true,
+        "use_chromagram_reward": true,
+        "use_target_length_reward": true,
+        "reward_combination_strategy": "advantage_first", // "reward_first" or "advantage_first"
+        "coco_style": {
+            "coco_type": "style", // content, style, or content_style
+            "downsample_rate": 8, // The original frame rate is 50 Hz, downsample to 6.25 Hz
+            "codebook_size": 512,
+            "hidden_size": 1024, // Representations Dim
+            "codebook_dim": 8,
+            "encoder": {
+                "vocos_dim": 384,
+                "vocos_intermediate_dim": 2048,
+                "vocos_num_layers": 12,
+            },
+            "decoder": {
+                "vocos_dim": 384,
+                "vocos_intermediate_dim": 2048,
+                "vocos_num_layers": 12,
+            },
+            "use_normed_whisper": true,
+            "whisper_stats_path": "models/svc/vevosing/config/whisper_stats.pt",
+            "whisper_dim": 1024,
+            "chromagram_dim": 24,
+        },
+        "coco_content_style": {
+            "coco_type": "content_style", // content, style, or content_style
+            "downsample_rate": 4, // The original frame rate is 50 Hz, downsample to 12.5 Hz
+            "codebook_size": 16384,
+            "hidden_size": 1024, // Representations Dim
+            "codebook_dim": 8,
+            "encoder": {
+                "vocos_dim": 384,
+                "vocos_intermediate_dim": 2048,
+                "vocos_num_layers": 12,
+            },
+            "decoder": {
+                "vocos_dim": 384,
+                "vocos_intermediate_dim": 2048,
+                "vocos_num_layers": 12,
+            },
+            "use_normed_whisper": true,
+            "whisper_stats_path": "models/svc/vevosing/config/whisper_stats.pt",
+            "whisper_dim": 1024,
+            "chromagram_dim": 24,
+        },
+    },
+}
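The comment on `wav_code_frame_rate` above encodes a small sum: both coco branches start from 50 Hz features, the style branch downsamples by 8 and the content-style branch by 4, and the AR model sees both streams. The arithmetic, using only values from the config:

```python
# wav_code_frame_rate = content-style rate + prosody/style rate,
# derived from the two downsample_rate values in the config above.
base_rate = 50.0                     # Hz, pre-quantizer feature rate
style_rate = base_rate / 8           # coco_style: downsample_rate 8
content_style_rate = base_rate / 4   # coco_content_style: downsample_rate 4
wav_code_frame_rate = style_rate + content_style_rate

assert style_rate == 6.25
assert content_style_rate == 12.5
assert wav_code_frame_rate == 18.75  # matches the config comment
```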
contentstyle_modeling/posttrained/config.json
ADDED
@@ -0,0 +1,29 @@
+{
+  "_name_or_path": "epoch-0023_step-0027000_loss-0.000961",
+  "architectures": [
+    "Qwen2ForCausalLM"
+  ],
+  "attention_dropout": 0.0,
+  "bos_token_id": 151643,
+  "eos_token_id": 151645,
+  "hidden_act": "silu",
+  "hidden_size": 896,
+  "initializer_range": 0.02,
+  "intermediate_size": 4864,
+  "max_position_embeddings": 32768,
+  "max_window_layers": 21,
+  "model_type": "qwen2",
+  "num_attention_heads": 14,
+  "num_hidden_layers": 24,
+  "num_key_value_heads": 2,
+  "rms_norm_eps": 1e-06,
+  "rope_scaling": null,
+  "rope_theta": 1000000.0,
+  "sliding_window": null,
+  "tie_word_embeddings": true,
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.47.1",
+  "use_cache": true,
+  "use_sliding_window": false,
+  "vocab_size": 168565
+}
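The config above pins down the AR model's size. A rough parameter count from those numbers, assuming the published Qwen2 layout (GQA attention with biases on q/k/v only, gate/up/down MLP, tied embeddings); this is an estimate, not an official figure:

```python
# Rough parameter count implied by the Qwen2 config above. Assumes the standard
# Qwen2 architecture; every formula below is an estimate, not an upstream spec.
V, H, L, I = 168565, 896, 24, 4864   # vocab, hidden, layers, intermediate
n_heads, n_kv = 14, 2
head_dim = H // n_heads              # 64
kv_dim = n_kv * head_dim             # 128, grouped-query K/V width

embed = V * H                                           # tied in/out embeddings
attn = (H * H + H) + 2 * (H * kv_dim + kv_dim) + H * H  # q(+bias), k, v(+bias), o
mlp = 3 * H * I                                         # gate, up, down projections
norms = 2 * H                                           # two RMSNorms per layer
total = embed + L * (attn + mlp + norms) + H            # plus final norm

assert 4.8e8 < total < 5.3e8  # ~0.51B, consistent with "Qwen2.5-0.5B"
```

The embedding table dominates here (vocab_size 168565 is enlarged by the added prosody/content-style audio tokens), which is why the checkpoint is ~970 MB in bf16.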
contentstyle_modeling/posttrained/generation_config.json
ADDED
@@ -0,0 +1,14 @@
+{
+  "bos_token_id": 151643,
+  "do_sample": true,
+  "eos_token_id": [
+    151645,
+    151643
+  ],
+  "pad_token_id": 151643,
+  "repetition_penalty": 1.1,
+  "temperature": 0.7,
+  "top_k": 20,
+  "top_p": 0.8,
+  "transformers_version": "4.47.1"
+}
contentstyle_modeling/posttrained/merges.txt
ADDED
(diff too large to render)
contentstyle_modeling/{PhoneToVq8192 → posttrained}/model.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b056f8b4f0616dd5eb8598a0cc3fe5a4bd9dcdffc2dbe344156a16691496cf16
+size 1017897016
contentstyle_modeling/posttrained/special_tokens_map.json
ADDED
(diff too large to render)
tokenizer/vq32/hubert_large_l18_c32.pkl → contentstyle_modeling/posttrained/tokenizer.json
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:2b4afa3f042a3dd5ddf573c035c68f88810d01d714fe3341054e9688c0eebef7
+size 13891825
contentstyle_modeling/posttrained/tokenizer_config.json
ADDED
(diff too large to render)
contentstyle_modeling/posttrained/vocab.json
ADDED
(diff too large to render)
tokenizer/{vq8192 → contentstyle_fvq16384_12.5hz}/model.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:bebdcea39e2d0134dbcce193aed0e7b0d393a866346317cd196389e40e9151b0
+size 244781696
tokenizer/prosody_fvq512_6.25hz/model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:754ba5928dfe4dd3c4b6e91ce396e2a50cff52937053c0e3e4ac3a854a4b8ae4
+size 273642504
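The two tokenizer directory names encode their code rates: content-style codes at 12.5 Hz and prosody codes at 6.25 Hz. A small budget calculator for how many audio tokens a clip contributes to the AR sequence; flooring fractional counts is an assumption about rounding, not upstream behavior:

```python
# Audio-token budget implied by the 12.5 Hz and 6.25 Hz tokenizer rates above.
# Rounding down fractional frame counts is an assumption for this sketch.
import math

def audio_tokens(seconds: float) -> tuple[int, int]:
    """(content-style tokens, prosody tokens) for a clip of given length."""
    return math.floor(seconds * 12.5), math.floor(seconds * 6.25)

# A 30 s clip (the amphion_config max_dur) costs 375 + 187 audio tokens.
cs, pr = audio_tokens(30)
assert (cs, pr) == (375, 187)
```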
tokenizer/vq32/hubert_large_l18_c32.yaml
DELETED
@@ -1,30 +0,0 @@
-bias: true
-code_dim: 1024
-codebook_num: 1
-codebook_size: 32
-dec_block_dilations:
-- 1
-- 1
-dec_block_kernel_size: 3
-dec_kernel_size: 3
-dec_ratios:
-- 1
-- 1
-dec_strides:
-- 1
-- 1
-decode_channels: 1024
-enc_block_dilations:
-- 1
-- 1
-enc_block_kernel_size: 3
-enc_kernel_size: 3
-enc_ratios:
-- 1
-- 1
-enc_strides:
-- 1
-- 1
-encode_channels: 1024
-input_channels: 1024
-output_channels: 1024
vocoder/config.json
ADDED
@@ -0,0 +1,84 @@
+{
+    "model_type": "Vocoder",
+    "preprocess": {
+        "hop_size": 480,
+        "sample_rate": 24000,
+        "max_length": 36000,
+        "n_fft": 1920,
+        "num_mels": 128,
+        "win_size": 1920,
+        "fmin": 0,
+        "fmax": 12000,
+        "mel_var": 8.14,
+        "mel_mean": -4.92,
+        "load_phone": false,
+        "load_chromagram": false
+    },
+    "model": {
+        "vocos": {
+            "input_channels": 128,
+            "dim": 1024,
+            "intermediate_dim": 4096,
+            "num_layers": 30,
+            "n_fft": 1920,
+            "hop_size": 480,
+            "padding": "same"
+        },
+        "period_gan": {
+            "max_downsample_channels": 1024,
+            "channels": 64,
+            "channel_increasing_factor": 2
+        },
+        "spec_gan": {
+            "stft_params": {
+                "fft_sizes": [
+                    128,
+                    256,
+                    512,
+                    1024,
+                    2048
+                ],
+                "hop_sizes": [
+                    32,
+                    64,
+                    128,
+                    256,
+                    512
+                ],
+                "win_lengths": [
+                    128,
+                    256,
+                    512,
+                    1024,
+                    2048
+                ],
+                "window": "hann_window"
+            },
+            "in_channels": 1,
+            "out_channels": 1,
+            "channels": 64,
+            "kernel_sizes": [
+                5,
+                3
+            ],
+            "max_downsample_channels": 1024,
+            "down_scales": [
+                2,
+                2,
+                2
+            ],
+            "use_weight_norm": true,
+            "use_complex": false
+        }
+    },
+    "loss": {
+        "mel_loss": {
+            "sample_rate": 24000
+        },
+        "disc_loss_weight": 1.0,
+        "mel_loss_weight": 10.0,
+        "adv_loss_weight": 2.0,
+        "fm_loss_weight": 2.0,
+        "spec_fm_loss_weight": 1.0
+    },
+}
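The vocoder config shares the FM model's framing: `hop_size` 480 at a 24 kHz `sample_rate`. The timing that implies, computed from the config values alone:

```python
# Frame timing implied by the vocoder config above: each mel frame advances
# hop_size samples, so hop duration and frame rate follow directly.
hop_size, sample_rate = 480, 24000
hop_ms = 1000 * hop_size / sample_rate    # milliseconds of audio per frame
frames_per_second = sample_rate / hop_size

assert hop_ms == 20.0          # 20 ms hop
assert frames_per_second == 50.0
```

This 50 Hz mel frame rate is what the FM model's `cond_scale_factor` of 4 upsamples the 12.5 Hz content-style codes to match.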
{acoustic_modeling/Vocoder → vocoder}/model.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:5b5d1a46b19351c9a71bd8a5a59dd16be0be2ddefe70d3a0b4915d9a425e56d3
 size 1020206416
{acoustic_modeling/Vocoder → vocoder}/model_1.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:850799d78699134b969056183fc9d490c51f8d8154d0ed00fae3e738b6b30af6
 size 69768280
{acoustic_modeling/Vocoder → vocoder}/model_2.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:56130fd13d5fbe828e56d61edb0049d35700db0472a866b8167d1d217d2687f8
 size 180693296