Update transformers to v4.41.1 (to support the Phi-3 models)

#2
by MerlinLi - opened

As per the title.

Beijing Academy of Artificial Intelligence org

Thanks for your suggestion. We're already updating to the latest transformers (v4.41.1) and adding the Phi-3 models. We'll keep you posted on our progress!

Updating the leaderboard's dependencies may help it support models such as Gemma and Qwen2.

Below are some confusing/misleading error messages that might be resolved once the dependencies are updated (the problem does not affect other leaderboards); see the sketch after the affected models below.

Screenshot_2024.06.17_15-49-10.png
Screenshot_2024.06.17_15-50-18.png
Screenshot_2024.06.17_15-51-08.png

migtissera/Tess-v2.5.2-Qwen2-72B
cognitivecomputations/dolphin-2.9.2-qwen2-72b
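
For context, here is a minimal sketch of how that error can surface, assuming the Space still pins transformers==4.36.0 (which, as far as I know, predates built-in Qwen2 support); the model ID is one of those listed above:

```python
from transformers import AutoConfig

# On an older transformers release the "qwen2" model type is not registered,
# so loading the config without trust_remote_code raises a ValueError about
# an unrecognized architecture; on a recent release it loads fine.
try:
    config = AutoConfig.from_pretrained("migtissera/Tess-v2.5.2-Qwen2-72B")
    print(config.model_type)  # prints "qwen2" on a new enough transformers
except ValueError as err:
    print(f"Architecture not recognized by this transformers version: {err}")
```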

Low-medium priority: but for some reason, the error message about …remote_code does not occur on the main HuggingFace leaderboard.
Screenshot_2024.06.17_16-02-01.png

(see https://huggingface.co/datasets/open-llm-leaderboard/requests/commits/main)
(main HuggingFace leaderboard) Screenshot_2024.06.17_16-09-37.png
Screenshot_2024.06.17_16-10-06.png


However, cognitivecomputations/dolphin-2.9.2-Phi-3-Medium did submit successfully here:

Screenshot_2024.06.17_16-13-37.png

Small question, medium priority: are the models evaluated in the requested precision or in float16? (Often the native precision is bfloat16; check the model's config.json.) Could you look into that?
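
For reference, a minimal sketch of how the native precision could be read from config.json and honored at load time (assuming the standard torch_dtype field; the model ID is one from above):

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "cognitivecomputations/dolphin-2.9.2-qwen2-72b"

# config.json usually records the dtype the weights were saved in.
config = AutoConfig.from_pretrained(model_id)
print(config.torch_dtype)  # often torch.bfloat16 for recent models

# Evaluating in the requested precision means passing it explicitly
# instead of defaulting to float16.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=config.torch_dtype)
```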

https://huggingface.co/spaces/BAAI/open_cn_llm_leaderboard/blob/main/requirements.txt
says it's still transformers==4.36.0

What part of it is at 4.41.1?
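
If it helps, a quick way to confirm which version the Space environment is actually running (independent of the pin in requirements.txt):

```python
import transformers

# Prints the transformers version installed in the running environment;
# the requirements.txt pin above suggests it is still 4.36.0.
print(transformers.__version__)
```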
