Submission doesn't appear to work

#5
by Qubitium - opened

I tried to submit https://huggingface.co/ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit to the eval, but it was never added to the pending list, even after many hours. I have since resubmitted multiple times to see whether that would trigger the queue, but the queue list appears to be static: nothing is moving in or out.

Intel org

hi @Qubitium, the model was submitted successfully, but it couldn't be loaded normally with the huggingface/transformers API. So we followed the usage from your model card and reran the model with https://github.com/ModelCloud/GPTQModel. We will update the results soon.
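For reference, loading the checkpoint through GPTQModel instead of plain transformers looks roughly like this. This is a minimal sketch: `GPTQModel.from_quantized` is assumed from the library's API at the time, so check the model card for the exact, current usage.

```python
# Sketch: load the 4-bit GPTQ checkpoint with GPTQModel instead of the
# plain transformers API (which failed to load it here).
# `GPTQModel.from_quantized` is an assumption; see the model card for
# the exact usage.
from gptqmodel import GPTQModel
from transformers import AutoTokenizer

model_id = "ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTQModel.from_quantized(model_id)
```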

Thanks~~

@lvkaokao Is Intel using lm-eval to run the model? If so, it should load fine; we tested this model with the lm-eval framework.

Also, can you tell us which arc-challenge score from lm-eval is used: arc-c or arc-c-norm? lm-eval generates two scores for arc-challenge. We would like to reproduce the scores generated by this benchmark. Thanks.

Intel org
edited Jul 28

> @lvkaokao Is Intel using lm-eval to run the model? If so, it should load fine; we tested this model with the lm-eval framework.
>
> Also, can you tell us which arc-challenge score from lm-eval is used: arc-c or arc-c-norm? lm-eval generates two scores for arc-challenge. We would like to reproduce the scores generated by this benchmark. Thanks.

Yes, we use the lm-eval framework. The model can also be evaluated successfully with lm-eval if we set `autogptq=True`.

We use the arc-challenge acc score.

Thanks
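The setup described in this reply can be condensed into a single lm-eval invocation. This is a hedged sketch: the `lm_eval` entry point and the `autogptq=True` model argument are assumed from the lm-eval version in use, and flag names differ across releases.

```shell
# Sketch: evaluate the GPTQ model on ARC-Challenge via lm-eval, loading
# through AutoGPTQ as described above. Flag names may vary by version.
lm_eval --model hf \
  --model_args pretrained=ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit,autogptq=True \
  --tasks arc_challenge \
  --batch_size 8
```

The arc_challenge report includes both acc and acc_norm rows; per the reply above, the leaderboard uses acc.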

Intel org

hi @Qubitium, the results for ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit are now updated.

lvkaokao changed discussion status to closed
