
# RERobbins/qg_T5_triviaqa model

This model is based on the google/t5-v1_1-base pretrained model.
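
As a minimal sketch (not part of the original card), the checkpoint can be loaded with the Hugging Face transformers library like any T5-style seq2seq model. The `generate question:` prompt prefix below is an assumed example for illustration, not a documented input format:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "RERobbins/qg_T5_triviaqa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Encode an input and generate, as with any T5-style seq2seq model.
# The prompt prefix here is a hypothetical example, not a documented format.
inputs = tokenizer("generate question: The capital of France is Paris.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```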

## Model Recycling

Evaluation on 36 datasets using RERobbins/qg_T5_triviaqa as a base model yields an average score of 76.82, compared to 68.82 for google/t5-v1_1-base.

As of 18/01/2023, the model is ranked 3rd among all tested models for the google/t5-v1_1-base architecture.

Results:

| Dataset | Score |
|---|---|
| 20_newsgroup | 87.6261 |
| ag_news | 89.7333 |
| amazon_reviews_multi | 67.14 |
| anli | 53.1563 |
| boolq | 74.0979 |
| cb | 82.1429 |
| cola | 79.0029 |
| copa | 56 |
| dbpedia | 77.6333 |
| esnli | 90.7471 |
| financial_phrasebank | 86.1 |
| imdb | 93.8 |
| isear | 72.4902 |
| mnli | 88.1204 |
| mrpc | 87.0098 |
| multirc | 57.2814 |
| poem_sentiment | 87.5 |
| qnli | 93.3004 |
| qqp | 90.1187 |
| rotten_tomatoes | 90.1501 |
| rte | 72.2022 |
| sst2 | 94.2661 |
| sst_5bins | 56.8778 |
| stsb | 87.2745 |
| trec_coarse | 98 |
| trec_fine | 91.8 |
| tweet_ev_emoji | 46.95 |
| tweet_ev_emotion | 81.6327 |
| tweet_ev_hate | 53.367 |
| tweet_ev_irony | 76.6582 |
| tweet_ev_offensive | 85.5814 |
| tweet_ev_sentiment | 71.0029 |
| wic | 66.6144 |
| wnli | 49.2958 |
| wsc | 56.7308 |
| yahoo_answers | 74.0667 |
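
As a quick sanity check (not part of the original card), the reported average can be reproduced from the per-dataset scores above, assuming an unweighted mean:

```python
# Unweighted mean of the 36 per-dataset scores reported above.
scores = [
    87.6261, 89.7333, 67.14, 53.1563, 74.0979, 82.1429, 79.0029, 56,
    77.6333, 90.7471, 86.1, 93.8, 72.4902, 88.1204, 87.0098, 57.2814,
    87.5, 93.3004, 90.1187, 90.1501, 72.2022, 94.2661, 56.8778, 87.2745,
    98, 91.8, 46.95, 81.6327, 53.367, 76.6582, 85.5814, 71.0029,
    66.6144, 49.2958, 56.7308, 74.0667,
]
print(round(sum(scores) / len(scores), 2))  # -> 76.82
```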

For more information, see the Model Recycling page.