Delete kevinpro/Hydra-LLaMA3-8B-0513-preview_eval_request_False_float32_Original.json

#14

see https://huggingface.co/datasets/OALL/requests/discussions/13

The float32 run seems to be holding up other models submitted to this leaderboard in the same time frame.
Is it okay for it to be cancelled? It's been more than a month, and the difference between float16 and bfloat16 is typically ~1% or less on the Hugging Face leaderboard. Unless a model fails to run in lower precision, why do a float32 run at all, except for research purposes? And given compute limitations, doing that for an 8B model may be a bit costly...

(edit: correction - this leaderboard does support float32 submissions, unlike some others; still, I'm not sure full precision is worth running for large models, since it takes a long time)
I don't know what precision the model was natively (the owner deleted it), but if it was float32, the best lower-precision option would probably be bf16, since it avoids the exponent overflows that can produce NaNs in float16.
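For illustration only (a minimal sketch, not part of the original request): bfloat16 keeps float32's 8-bit exponent range, so magnitudes that overflow float16 stay representable.

```python
import torch

# A large activation value that exceeds float16's ~6.5e4 maximum
# but fits easily within bfloat16's float32-sized exponent range.
x = torch.tensor(1e5)

print(x.to(torch.float16))               # inf -> downstream ops may produce NaN
print(x.to(torch.bfloat16))              # ~1.00e+05, still representable
print(torch.finfo(torch.float16).max)    # 65504.0
print(torch.finfo(torch.bfloat16).max)   # ~3.39e+38 (same exponent range as float32)
```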

addressed

CombinHorizon changed pull request status to closed
