xVASynth xVAPitch (v3) voice models, based on the NVIDIA Hi-Fi TTS (NeMo) datasets.
Models created by Dan Ruta. Origin link:
Presumed dataset origin:
Papers referenced by the xVAPitch model:
- Multi-head attention with relative positional embedding - https://arxiv.org/pdf/1809.04281.pdf
- Transformer with relative positional encoding - https://arxiv.org/abs/1803.02155
- SDP (stochastic duration predictor) - https://arxiv.org/pdf/2106.06103.pdf
- Spline flow - https://arxiv.org/abs/1906.04032
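To illustrate the relative positional encoding referenced above (Shaw et al., arXiv:1803.02155), here is a minimal single-head NumPy sketch. It adds a learned per-distance key embedding to the attention logits; all function and parameter names are illustrative and not taken from the xVAPitch codebase.

```python
import numpy as np

def relative_buckets(q_len, k_len, max_dist):
    """Clipped relative distances j - i, shifted to the range [0, 2*max_dist]."""
    rel = np.arange(k_len)[None, :] - np.arange(q_len)[:, None]
    return np.clip(rel, -max_dist, max_dist) + max_dist

def relative_attention(x, Wq, Wk, Wv, rel_emb, max_dist):
    """Single-head self-attention with learned relative-position key embeddings.

    x:       (T, d_model) input sequence
    rel_emb: (2*max_dist + 1, d_head), one embedding per clipped distance
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_head = q.shape[-1]
    idx = relative_buckets(len(x), len(x), max_dist)
    # logits e_ij = q_i . k_j + q_i . a_{ij}, scaled by sqrt(d_head)
    logits = (q @ k.T + np.einsum("id,ijd->ij", q, rel_emb[idx])) / np.sqrt(d_head)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the distances are clipped at `max_dist`, the number of learned position embeddings stays constant regardless of sequence length.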
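The spline flow paper above (Neural Spline Flows, arXiv:1906.04032) uses monotonic rational-quadratic splines as invertible transforms. As a simplified stand-in, the sketch below implements a piecewise-linear monotone spline on [0, 1] with an exact inverse and log-determinant; names and the linear (rather than rational-quadratic) bins are assumptions for illustration only.

```python
import numpy as np

def linear_spline_forward(x, widths, heights):
    """Monotone piecewise-linear map [0,1] -> [0,1].

    widths, heights: (K,) positive bin sizes, each summing to 1.
    Returns y and log|dy/dx| (the per-element log-determinant term).
    """
    xk = np.concatenate([[0.0], np.cumsum(widths)])   # input knot positions
    yk = np.concatenate([[0.0], np.cumsum(heights)])  # output knot positions
    k = np.clip(np.searchsorted(xk, x, side="right") - 1, 0, len(widths) - 1)
    slope = heights[k] / widths[k]
    return yk[k] + slope * (x - xk[k]), np.log(slope)

def linear_spline_inverse(y, widths, heights):
    """Exact inverse of linear_spline_forward."""
    xk = np.concatenate([[0.0], np.cumsum(widths)])
    yk = np.concatenate([[0.0], np.cumsum(heights)])
    k = np.clip(np.searchsorted(yk, y, side="right") - 1, 0, len(widths) - 1)
    return xk[k] + (y - yk[k]) * widths[k] / heights[k]
```

Because each bin's slope is positive, the map is strictly monotone, so both directions and the Jacobian are cheap to evaluate, which is what makes spline-based flows practical inside a TTS model.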
ccby_nvidia_hifi_6671_M:
ccby_nvidia_hifi_92_F:
ccby_nv_hifi_11614_F:
ccby_nvidia_hifi_11697_F:
ccby_nvidia_hifi_12787_F:
ccby_nvidia_hifi_6097_M:
ccby_nvidia_hifi_6670_M:
ccby_nvidia_hifi_8051_F:
ccby_nvidia_hifi_9017_M:
ccby_nvidia_hifi_9136_F:
Legal note: although these datasets are licensed under CC BY 4.0, the base v3 model from which these voices were fine-tuned was pre-trained on non-permissive data.