So far my favorite model. Thanks for sharing!
#4 by Flanua - opened
This model is very good at conversations and at reasoning, probably because it has 30B parameters compared to my old LLaMA 13B model. It's worse at coding than LLaMA 13B, but that's most likely due to the datasets and the way it was trained.
P.S.: Wizard-Vicuna-30B can read text from pictures almost immediately, whereas the LLaMA 13B model couldn't read text from pics correctly at all.
I wish there were a Wizard-Vicuna-65B model or even larger.
I'd also like a higher max context limit, from 2048 tokens to at least 4096, though I'm not sure whether Wizard-Vicuna-30B already supports more than 2048.
Thanks for sharing this model.