Adapter Finetune Format
Hi, I submitted a PEFT-finetuned model in adapter form a day ago, and saw its status in the requests dataset repeatedly flip from pending to finished automatically before another version was (I assume) manually tested by the devs and added to the results. I plan to eventually submit another version of the model after fixing some training errors that led to generally poor performance, and I was wondering whether there is anything I can do on my end to make the next submission go more smoothly and avoid extra burden on the HF developers.
For reference, the adapter files (adapter_config.json and adapter_model.bin) were generated with autotrain-advanced, uploaded manually to the repo, and used successfully for local inference. My best guess is that (previously) invalid merged weights uploaded to the same repo were being loaded and causing an error, despite the submission being specified as an adapter, but I'm not sure.
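In case it helps with debugging, this is roughly how I verified the adapter locally (a minimal sketch; the repo names below are placeholders, not the actual repos):

```python
# Minimal sketch of loading a PEFT adapter on top of its base model for
# a quick local inference check. Repo names are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "base/model-name"      # hypothetical base model repo
adapter_repo = "username/adapter-repo"   # hypothetical adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Load the adapter (adapter_config.json + adapter_model.bin) onto the base.
model = PeftModel.from_pretrained(base_model, adapter_repo)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```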
Thank you!
Hi @julianweng, yes, we had some issues evaluating some PEFT models smoothly, so I had to manually change the status of a few models and re-run them. The model appearing on the leaderboard is your model with the adapter weights applied; we need to upload it to the Hub for the evaluation to work. We are looking into a way to point to your original model instead of the new model with the weights applied.
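For reference, the merge step looks roughly like this (a sketch assuming LoRA-style adapters; repo names are placeholders):

```python
# Rough sketch of applying adapter weights to the base model and pushing
# the merged model to the Hub, assuming a LoRA adapter. Repo names are
# placeholders, not the actual evaluation pipeline's code.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("base/model-name")
peft_model = PeftModel.from_pretrained(base_model, "username/adapter-repo")

# merge_and_unload() folds the adapter weights into the base weights and
# returns a plain transformers model that can be evaluated directly.
merged_model = peft_model.merge_and_unload()
merged_model.push_to_hub("username/merged-model")  # hypothetical target repo
```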
Thanks for your patience!
@SaylorTwift Thanks for your response! Am I free to submit additional PEFT models for evaluation, or should I hold off for now?
@julianweng I think you should be able to submit PEFT models without any issue, so sorry for not answering sooner!