Add evaluation results on the default config and test split of phpthinh/data_1

#27
opened by autoevaluator (HF staff)

Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the default config and test split of the phpthinh/data_1 dataset by @phpthinh, using the predictions stored here.
Accept this pull request to see the results displayed on the Hub leaderboard.
Evaluate your model on more datasets here.

BigScience Workshop org
•
edited Oct 22, 2022

Hey again @phpthinh! Would you mind closing the PRs whenever you run a private evaluation against the models? It would help tremendously!

I wish I could close the PRs when running private evaluations!
Unfortunately, I can't close them myself, since they are created automatically by the tool.
I will contact the team about this issue.
Sorry for the annoyance!

cakiki changed pull request status to closed
BigScience Workshop org
•
edited Oct 23, 2022

No worries, it's not your fault. If you could close them, that would be great; if not, I guess we'll open a feature request so that you can evaluate without having to open a PR.

Hi @phpthinh , excited to see that you're using Evaluation on the Hub for the BLOOM models! We should definitely add a feature to allow people to evaluate models without opening a PR.

In the meantime, a temporary workaround may be to clone the BLOOM model repository into your own namespace on the Hub. Then you can run evaluation jobs against your copy as much as you'd like without pinging the authors!
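
For anyone who wants to do this programmatically, here is a minimal sketch using the `huggingface_hub` Python library. The repo names are illustrative: `bigscience/bloom-560m` is a small BLOOM variant used so the download stays manageable (the full BLOOM checkpoint is hundreds of GB), and `your-username/bloom-560m-copy` is a hypothetical target repo:

```python
from huggingface_hub import HfApi, snapshot_download

# Download a snapshot of the upstream model repo into the local cache.
# bloom-560m is a small BLOOM variant used here purely for illustration.
local_dir = snapshot_download(repo_id="bigscience/bloom-560m")

api = HfApi()

# Create a private repo under your own namespace (name is hypothetical).
api.create_repo(repo_id="your-username/bloom-560m-copy", private=True)

# Push the downloaded files to your copy of the repo.
api.upload_folder(
    folder_path=local_dir,
    repo_id="your-username/bloom-560m-copy",
    repo_type="model",
)
```

Evaluation jobs can then target `your-username/bloom-560m-copy`, so any auto-generated PRs land on your copy instead of the upstream BigScience repo.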

That would be great!
Thanks for your suggestion, @mathemakitten!
