Evaluate fine-tuned LLaMA-7B-GLoRA model
Hi,
We tried evaluating the fine-tuned LLaMA-7B-GLoRA-ShareGPT model, but it fails with an error similar to the one reported in https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/84.
Could you please help evaluate https://huggingface.co/MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT?
GLoRA Paper for reference - https://arxiv.org/abs/2306.07967
Thanks
Hi! Thanks for your interest in the leaderboard.
We don't allow models requiring trust_remote_code=True
to be submitted on the cluster (we don't have the time to manually check the code of every submitted model), but we are working on adding this option and will announce on Twitter when it's possible!
Hello @clefourrier , thanks a lot for your reply!
I understand you are quite busy, but the only change from the stock LLaMA code is that we set bias=True; everything else is the same.
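For context, here is an illustrative sketch of the kind of change described above (the actual GLoRA code may differ): stock LLaMA uses bias-free linear projections, while the variant in this thread enables the bias term.

```python
import torch.nn as nn

hidden = 4096  # LLaMA-7B hidden size

# Stock LLaMA projection: no bias term
q_proj_stock = nn.Linear(hidden, hidden, bias=False)

# The variant described in this thread: same layer, bias enabled
q_proj_glora = nn.Linear(hidden, hidden, bias=True)

print(q_proj_stock.bias is None)      # True
print(q_proj_glora.bias is not None)  # True
```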
We believe GLoRA can be a better alternative to LoRA, and we plan to add quantization support (like QLoRA) to GLoRA. For that, however, we first need some validation of GLoRA's advantage over LoRA, and the Open LLM Leaderboard is the perfect place for it. Let me know if I can help in any way to get this model onto the leaderboard.
Regards,
Arnav
Hi @Arnav0400,
If you just need the scores for your model, you can follow the steps in the About section to run the equivalent evaluation locally. Best of luck!
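As a rough sketch of what running the evaluation locally looks like: the leaderboard is based on EleutherAI's lm-evaluation-harness, and a local run boils down to invoking its CLI with the model name from this thread. The exact flag names vary between harness versions, so treat this command line as an assumption to check against the About section, not a definitive recipe.

```python
# Assemble the lm-evaluation-harness command for a local run
# (flag names assumed from the harness CLI of that era; verify
# against the version pinned in the leaderboard's About section).
model = "MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT"

cmd = [
    "python", "main.py",
    "--model", "hf-causal",
    # trust_remote_code=True is needed since the model ships custom code
    "--model_args", f"pretrained={model},trust_remote_code=True",
    "--tasks", "arc_challenge",  # 25-shot ARC is one of the leaderboard tasks
    "--num_fewshot", "25",
]

print(" ".join(cmd))
```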