Add evaluation results on the autoevaluate--wmt16-ro-en-sample config and test split of autoevaluate/wmt16-ro-en-sample

#1
opened by lewtun (Evaluation on the Hub org)

Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the autoevaluate--wmt16-ro-en-sample config and test split of the autoevaluate/wmt16-ro-en-sample dataset by @lewtun, using the predictions stored here.
Accept this pull request to see the results displayed on the Hub leaderboard.
Evaluate your model on more datasets here.
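If you want to sanity-check the numbers locally before merging, the sketch below re-runs an evaluation on the same config and test split using the `datasets`, `transformers`, and `evaluate` libraries. The model id placeholder, the `translation` column layout (`{"ro": ..., "en": ...}` pairs), the `translation` pipeline task, and the choice of SacreBLEU as the metric are assumptions for illustration; the bot's actual evaluation pipeline may differ.

```python
# Minimal local re-evaluation sketch (assumptions noted in comments).
from datasets import load_dataset
from transformers import pipeline
import evaluate

# Config and split named in the bot's message above.
ds = load_dataset(
    "autoevaluate/wmt16-ro-en-sample",
    "autoevaluate--wmt16-ro-en-sample",
    split="test",
)

# Hypothetical model id -- replace with the model this PR was opened on.
# Some models may need an explicit task such as "translation_ro_to_en".
translator = pipeline("translation", model="<your-ro-en-model>")

# Assumes a "translation" column holding {"ro": ..., "en": ...} dicts,
# as in the upstream wmt16 ro-en dataset.
sources = [ex["translation"]["ro"] for ex in ds]
references = [[ex["translation"]["en"]] for ex in ds]

predictions = [out["translation_text"] for out in translator(sources)]

# SacreBLEU is assumed here as the translation metric of interest.
sacrebleu = evaluate.load("sacrebleu")
results = sacrebleu.compute(predictions=predictions, references=references)
print(results["score"])
```

This snippet is only a local check; accepting the pull request is still what publishes the results to the Hub leaderboard.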

