Impressive

#1
by captainst - opened

Really impressive. The model is able to answer questions based on a software log input of about 2K tokens.
My first test shows that its performance is comparable to bigger models like 13B ones, and sometimes even better.
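For anyone wanting to reproduce a similar test, here is a minimal sketch of prompting the model with a log excerpt via `transformers`. The model ID, prompt template, and log line are assumptions for illustration only; check the model card for the actual repo name and expected prompt format.

```python
# Minimal sketch: ask an instruct-tuned model a question about a log excerpt.
# Assumptions (not confirmed in this thread): model ID and "### Instruction / ### Response" template.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="vmware/open-llama-7b-v2-open-instruct",  # hypothetical model ID
)

# Hypothetical log line standing in for a ~2K-token log input.
log_excerpt = "2023-07-20 12:01:03 ERROR worker-3 Connection to db-host:5432 timed out after 30s"

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    f"Based on the following log, why did the job fail?\n{log_excerpt}\n\n"
    "### Response:"
)

# Greedy decoding keeps the answer deterministic for a quick comparison test.
output = generator(prompt, max_new_tokens=128, do_sample=False)
print(output[0]["generated_text"])
```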

VMware AI Labs org

Awesome, thank you for letting us know!

VMware AI Labs org

A performance comparison with the previous version is on the MODEL CARD: https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md

VMware AI Labs org

This model is built on OpenLLaMA, not Llama 2. It was fine-tuned on version 2 of our open-instruct training set.

Oh sorry, I mixed them up. LLaMA 2 has just been such a hot topic lately.
