Reason for failed model evaluation

#451
by kyujinpy - opened

Hello! Thank you for your contribution.

Today, I checked my model, kyujinpy/PlatYi-34B-Llama-Q-v2.
However, it failed evaluation.

What is the problem?
Could you let me know?

Thank you.

Open LLM Leaderboard org

Hi!
Did you follow the steps in the FAQ and submission guide? For example, your model must be uploaded in the safetensors format.
Then, can you point us to the request file?

Open LLM Leaderboard org
edited Dec 12, 2023

@Q-bert this job was cancelled, I relaunched it - please open a dedicated issue next time, it's easier for us to keep track that way :)

@clefourrier Thank you for your comment.
But when I checked models that completed evaluation, the .bin format seems to be possible.

For example, kyujinpy/PlatYi-34B-200k-Q-FastChat.

Is there another problem?

Open LLM Leaderboard org

Hi! We recommend providing models in the safetensors format to make sure evaluations go well. :)
I'd suggest that you

  1. follow all the steps in the About page (= converting your model to safetensors, making sure you can load it with AutoModel, ...) and
  2. point us to the request file of your model so we can analyse the results (like Qbert did above).

@clefourrier
Thank you for the guidelines!

I finished the steps above for kyujinpy/PlatYi-34B-Llama-Q-v2:

  1. Converted to safetensors
  2. Double-checked loading with code
  3. Request file: Version2 Request_file

I also updated another model, kyujinpy/PlatYi-34B-Llama-Q-v3, in the same way.
Could you also check this model?

Request file: Version3 Request_file


Thank you very much!

Open LLM Leaderboard org

Hi!
Thanks a lot for updating this issue :)

I relaunched both of your models, the backend had not managed to download them in their prior version.

clefourrier changed discussion status to closed
