vatsal-metavoice committed
Commit fd48778
1 Parent(s): 7dc2704

Update README.md

Files changed (1):
  1. README.md +68 -0
README.md CHANGED
@@ -1,3 +1,71 @@
---
license: apache-2.0
+ language:
+ - en
---
+
+ MetaVoice-1B is a 1.2B parameter base model trained on 100K hours of speech for TTS (text-to-speech). It has been built with the following priorities:
+ * Emotional speech rhythm and tone in English. No hallucinations.
+ * Support for voice cloning with fine-tuning.
+   * We have had success with as little as 1 minute of training data for Indian speakers.
+ * Zero-shot cloning for American & British voices, with 30s of reference audio.
+ * Support for long-form synthesis.
+
+ We're releasing MetaVoice-1B under the Apache 2.0 license; *it can be used without restrictions*.
+
+ ## Installation
+ ```bash
+ # install ffmpeg
+ wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz
+ wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz.md5
+ md5sum -c ffmpeg-git-amd64-static.tar.xz.md5
+ tar xvf ffmpeg-git-amd64-static.tar.xz
+ sudo mv ffmpeg-git-*-static/ffprobe ffmpeg-git-*-static/ffmpeg /usr/local/bin/
+ rm -rf ffmpeg-git-*
+
+ pip install -r requirements.txt
+ pip install -e .
+ ```
+
+ ## Download
+ ```bash
+ wget https://cdn.themetavoice.xyz/metavoice-1B-v0.1.tar
+ tar -xvf metavoice-1B-v0.1.tar
+ ```
+
+ ## Usage
+ 1. [Download it](https://cdn.themetavoice.xyz/metavoice-1B-v0.1.tar) and use it anywhere (including locally) with our [reference implementation](/fam/llm/sample.py):
+ ```bash
+ python fam/llm/sample.py --model_dir=<PATH_TO_MODEL_DIR> --spk_cond_path=<PATH_TO_TARGET_AUDIO>
+ ```
+
+ 2. Deploy it on any cloud (AWS/GCP/Azure), using our [inference server](/fam/llm/serving.py):
+ ```bash
+ python fam/llm/serving.py --model_dir=<PATH_TO_MODEL_DIR>
+ ```
+
+ 3. Use it on HuggingFace
+
+ ## Soon
+ - Long-form TTS
+ - Fine-tuning code
+
+ ## Architecture
+ We predict EnCodec tokens from text and speaker information. This is then diffused up to the waveform level, with post-processing applied to clean up the audio.
+
+ * We use a causal GPT to predict the first two hierarchies of EnCodec tokens. Text and audio are part of the LLM context. Speaker information is passed via conditioning at the token embedding layer. This speaker conditioning is obtained from a separately trained speaker verification network.
+   - The two hierarchies are predicted in a "flattened interleaved" manner: we predict the first token of the first hierarchy, then the first token of the second hierarchy, then the second token of the first hierarchy, and so on (see the sketch after this list).
+   - We use condition-free sampling to boost the cloning capability of the model.
+   - The text is tokenised using a custom-trained BPE tokeniser with 512 tokens.
+   - Note that, unlike other works, we've skipped predicting semantic tokens, as we found this isn't strictly necessary.
+ * We use a non-causal (encoder-style) transformer to predict the remaining six hierarchies from the first two. This is a super small model (~10M parameters) with extensive zero-shot generalisation to most speakers we've tried. Since it's non-causal, we're also able to predict all the timesteps in parallel.
+ * We use multi-band diffusion to generate waveforms from the EnCodec tokens. We noticed that the speech is clearer than with the original RVQ decoder or VOCOS. However, diffusion at the waveform level leaves some background artifacts which are quite unpleasant to the ear; we clean these up in the next step.
+ * We use DeepFilterNet to clear up the artifacts introduced by the multi-band diffusion.
+
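To make the "flattened interleaved" ordering concrete, here is a minimal Python sketch. It is illustrative only: the helper names and token values are made up, and this is not code from the repository.

```python
# Illustrative sketch of "flattened interleaved" ordering for two EnCodec
# hierarchies. Token values and lengths are dummy values for demonstration.

def flatten_interleave(hier1, hier2):
    """Interleave two equal-length token streams: [h1[0], h2[0], h1[1], h2[1], ...]."""
    assert len(hier1) == len(hier2)
    flat = []
    for a, b in zip(hier1, hier2):
        flat.append(a)
        flat.append(b)
    return flat

def unflatten(flat):
    """Invert the interleaving back into the two hierarchies."""
    return flat[0::2], flat[1::2]

if __name__ == "__main__":
    hier1 = [11, 12, 13, 14]   # first-hierarchy EnCodec tokens (dummy values)
    hier2 = [21, 22, 23, 24]   # second-hierarchy EnCodec tokens (dummy values)
    flat = flatten_interleave(hier1, hier2)
    print(flat)                # [11, 21, 12, 22, 13, 23, 14, 24]
    assert unflatten(flat) == (hier1, hier2)
```

Because the causal GPT models this single flattened stream left to right, each second-hierarchy token is predicted with its matching first-hierarchy token already in context.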
+ ## Optimizations
+ The model supports:
+ 1. KV-caching via Flash Decoding (see the sketch after this list)
+ 2. Batching (including texts of different lengths)
+
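For context on the first optimisation, below is a minimal PyTorch sketch of plain KV-cached decoding for a single attention head. It illustrates the general idea only and is not the repository's implementation; it also does not show Flash Decoding itself, which keeps the same cache but additionally splits the attention reduction over the cached key/value length so that long sequences decode faster.

```python
# Minimal single-head attention with a KV cache (illustration only).
# Reusing cached keys/values keeps each decoding step cheap instead of
# recomputing attention over the full prefix from scratch.
import torch

class CachedSelfAttention(torch.nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = torch.nn.Linear(d_model, 3 * d_model)
        self.k_cache = None  # (batch, past_len, d_model), grows every call
        self.v_cache = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, new_len, d_model); new_len is typically 1 while decoding.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        if self.k_cache is not None:
            # Reuse keys/values from previous steps instead of recomputing them.
            k = torch.cat([self.k_cache, k], dim=1)
            v = torch.cat([self.v_cache, v], dim=1)
        self.k_cache, self.v_cache = k, v
        scores = q @ k.transpose(1, 2) / (k.shape[-1] ** 0.5)
        return torch.softmax(scores, dim=-1) @ v

if __name__ == "__main__":
    torch.manual_seed(0)
    layer = CachedSelfAttention(d_model=16)
    tokens = torch.randn(2, 7, 16)  # dummy hidden states: batch of 2, 7 steps
    outputs = []
    for t in range(tokens.shape[1]):
        # Feed one position at a time; each step attends to all cached positions.
        outputs.append(layer(tokens[:, t : t + 1, :]))
    print(torch.cat(outputs, dim=1).shape)  # torch.Size([2, 7, 16])
```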
+ ## Contribute
+ - See all [active issues](https://github.com/themetavoicexyz/issues)!