Add evaluation results on the adversarialQA config of adversarial_qa

#2
opened by autoevaluator (HF staff)

Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the adversarialQA config of the adversarial_qa dataset by @ceyda, using the predictions stored here.
Accept this pull request to see the results displayed on the Hub leaderboard.
Evaluate your model on more datasets here.
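A minimal sketch of how such an evaluation can be reproduced locally with the `datasets`, `transformers`, and `evaluate` libraries. The checkpoint name below (`deepset/roberta-base-squad2`) and the choice of the SQuAD exact-match/F1 metric are assumptions; substitute the model and metric actually used by the autoevaluator.

```python
from datasets import load_dataset
from evaluate import evaluator
from transformers import pipeline

# Validation split of the combined adversarialQA config (dBiDAF + dBERT + dRoBERTa).
data = load_dataset("adversarial_qa", "adversarialQA", split="validation")

# Extractive QA pipeline for the model under evaluation (placeholder checkpoint).
qa_pipe = pipeline("question-answering", model="deepset/roberta-base-squad2")

# SQuAD-style exact-match / F1 scoring via the question-answering evaluator.
qa_evaluator = evaluator("question-answering")
results = qa_evaluator.compute(
    model_or_pipeline=qa_pipe,
    data=data,
    metric="squad",
)
print(results)  # e.g. {'exact_match': ..., 'f1': ..., ...}
```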

deepset org

Hey @ceyda 👋
We are going to close this PR for now because we trained our model on SQuAD data. We think comparing our model with models that were trained on the adversarialQA training set would be confusing for users. We would, however, be very interested to see the performance of models that use this one as a baseline and are then further trained on the adversarialQA dataset; a rough sketch of that setup follows.
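A minimal sketch of such continued training, assuming the baseline checkpoint is `deepset/roberta-base-squad2` (a placeholder; use whichever SQuAD-trained model this PR targets) and using the standard `Trainer` API. Hyperparameters are illustrative only.

```python
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          DefaultDataCollator, Trainer, TrainingArguments)

checkpoint = "deepset/roberta-base-squad2"  # assumed SQuAD-trained baseline
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)
raw = load_dataset("adversarial_qa", "adversarialQA")

def preprocess(examples):
    # Tokenize question/context pairs and map character-level answer spans
    # to start/end token positions, handling long contexts with a stride.
    enc = tokenizer(examples["question"], examples["context"],
                    truncation="only_second", max_length=384, stride=128,
                    return_overflowing_tokens=True, return_offsets_mapping=True,
                    padding="max_length")
    sample_map = enc.pop("overflow_to_sample_mapping")
    offsets = enc.pop("offset_mapping")
    start_positions, end_positions = [], []
    for i, offset in enumerate(offsets):
        answer = examples["answers"][sample_map[i]]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        ctx_start = seq_ids.index(1)
        ctx_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)
        if offset[ctx_start][0] > start_char or offset[ctx_end][1] < end_char:
            # Answer is not fully inside this stride window.
            start_positions.append(0)
            end_positions.append(0)
        else:
            token_start = ctx_start
            while token_start <= ctx_end and offset[token_start][0] <= start_char:
                token_start += 1
            start_positions.append(token_start - 1)
            token_end = ctx_end
            while token_end >= ctx_start and offset[token_end][1] >= end_char:
                token_end -= 1
            end_positions.append(token_end + 1)
    enc["start_positions"] = start_positions
    enc["end_positions"] = end_positions
    return enc

train_ds = raw["train"].map(preprocess, batched=True,
                            remove_columns=raw["train"].column_names)

args = TrainingArguments(output_dir="roberta-base-squad2-adversarialqa",
                         per_device_train_batch_size=16, learning_rate=3e-5,
                         num_train_epochs=2)
trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  data_collator=DefaultDataCollator(), tokenizer=tokenizer)
trainer.train()
```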

Tuana changed pull request status to closed
