How to fine-tune w2v-bert-2.0 with multiple GPUs? (#25, opened 10 days ago by kssmmm)
How to use n-gram with this model? (#24, opened about 2 months ago by zkarapet00)
Link to "SeamlessM4T v1" paper, where w2v-BERT 2.0 was presented for the first time (#23, opened about 2 months ago by zuazo)
How to use an LM like an n-gram LM with w2v-bert-2.0? (#22, 2 replies, opened about 2 months ago by lukarape)
Clarification: w2v-BERT 2.0 was first presented in SeamlessM4T v1 (not v2) (#21, 2 replies, opened 2 months ago by zuazo)
Fine-tuning not working (#20, 1 reply, opened 3 months ago by Imran1)
Is quantization possible? (#18, 1 reply, opened 3 months ago by supercharge19)
Is it helpful for TTS? Can it match the performance of OpenAI TTS? (#17, 1 reply, opened 3 months ago by shawnsh)
Replace AutoProcessor with AutoFeatureExtractor (#16, 2 replies, opened 3 months ago by talipturkmen)
Tokenizer issues (#15, 2 replies, opened 3 months ago by Imran1)
Update README.md (#5, opened 4 months ago by longnv)