Model in wrong parameter class

#601
by appoose - opened

robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit seems to be included in the 3B category, but it appears to be based on the 7B Mistral-Instruct model.

Open LLM Leaderboard org

Hi!
Interesting, I'll check it out, thank you

The model is still listed in the wrong class. Looking at https://huggingface.co/robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit/blob/main/config.json, it is a 7B model with 4-bit quantization.
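As a side note, the architecture fields in a config.json are enough to recover the logical parameter count even when the weights are quantized, since quantization changes only the storage format. A minimal sketch below estimates the count for a Llama/Mistral-style model; the formula (grouped-query attention, gated MLP, untied LM head) is an assumption based on the standard Mistral architecture, not something stated in this thread:

```python
# Hypothetical sketch: estimate a Llama/Mistral-style model's parameter
# count from its config.json fields. Quantization (e.g. 4-bit GPTQ) does
# not change this count, only how the weights are stored.

def estimate_params(cfg: dict) -> int:
    h = cfg["hidden_size"]
    layers = cfg["num_hidden_layers"]
    inter = cfg["intermediate_size"]
    vocab = cfg["vocab_size"]
    heads = cfg["num_attention_heads"]
    kv_heads = cfg.get("num_key_value_heads", heads)  # grouped-query attention
    kv_dim = kv_heads * (h // heads)

    attn = 2 * h * h + 2 * h * kv_dim  # q/o projections + smaller k/v projections
    mlp = 3 * h * inter                # gate, up, and down projections
    norms = 2 * h                      # two RMSNorm weight vectors per layer
    per_layer = attn + mlp + norms

    embed = vocab * h     # token embedding table
    lm_head = vocab * h   # untied output head (assumption)
    return layers * per_layer + embed + lm_head + h  # + final norm

# Field values taken from the Mistral-7B-v0.2 config referenced above
mistral_cfg = {
    "hidden_size": 4096,
    "num_hidden_layers": 32,
    "intermediate_size": 14336,
    "vocab_size": 32000,
    "num_attention_heads": 32,
    "num_key_value_heads": 8,
}
print(estimate_params(mistral_cfg) / 1e9)  # ~7.24 billion parameters
```

Running this on the referenced config yields roughly 7.24B parameters, consistent with the model belonging in the 7B class rather than 3B.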

Open LLM Leaderboard org

Yes, it's likely linked to an issue we had earlier with the parameter count of GPTQ models. We are working on it, please be patient.

Open LLM Leaderboard org

Should be good at the next restart!

clefourrier changed discussion status to closed
