eladven committed on
Commit
fff7c73
1 Parent(s): 11b5b39

Evaluation results for the jmassot/bert-base-uncased-issues-128 model as a base model for other tasks


As part of a research effort to identify high-quality models on the Hugging Face Hub that can serve as base models for further fine-tuning, we evaluated this model by fine-tuning it on 36 datasets. It ranks 3rd among all tested models for the bert-base-uncased architecture as of 21/12/2022.
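
As a rough illustration of this recycling workflow, the sketch below fine-tunes the checkpoint on a downstream classification task with the transformers Trainer. The dataset (rotten_tomatoes), label count, and hyperparameters are illustrative placeholders, not the settings used in this evaluation.

```python
# Minimal sketch: fine-tuning jmassot/bert-base-uncased-issues-128 on a downstream
# classification task. Dataset, num_labels, and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "jmassot/bert-base-uncased-issues-128"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# A fresh classification head is added on top of the recycled encoder.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("rotten_tomatoes")
encoded = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=32,
                           evaluation_strategy="epoch"),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```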


To share this information with others in your model card, please add the following evaluation results to your README.md page.

For more information, please see https://ibm.github.io/model-recycling/ or contact me.

Best regards,
Elad Venezian
eladv@il.ibm.com
IBM Research AI

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -69,3 +69,17 @@ The following hyperparameters were used during training:
  - Pytorch 1.11.0+cu113
  - Datasets 1.16.1
  - Tokenizers 0.10.1
+
+ ## Model Recycling
+
+ [Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=1.50&mnli_lp=nan&20_newsgroup=1.15&ag_news=0.14&amazon_reviews_multi=-0.06&anli=0.80&boolq=2.51&cb=7.05&cola=0.82&copa=9.55&dbpedia=0.44&esnli=0.64&financial_phrasebank=10.97&imdb=-0.14&isear=-0.04&mnli=-0.16&mrpc=1.35&multirc=1.23&poem_sentiment=0.63&qnli=0.53&qqp=-0.54&rotten_tomatoes=0.42&rte=4.64&sst2=0.00&sst_5bins=0.01&stsb=0.57&trec_coarse=0.54&trec_fine=8.67&tweet_ev_emoji=0.40&tweet_ev_emotion=0.45&tweet_ev_hate=0.18&tweet_ev_irony=-0.80&tweet_ev_offensive=-0.25&tweet_ev_sentiment=0.54&wic=-0.87&wnli=1.55&wsc=1.35&yahoo_answers=-0.32&model_name=jmassot%2Fbert-base-uncased-issues-128&base_name=bert-base-uncased) using jmassot/bert-base-uncased-issues-128 as a base model yields an average score of 73.70, compared to 72.20 for bert-base-uncased.
+
+ The model is ranked 3rd among all tested models for the bert-base-uncased architecture as of 21/12/2022.
+ Results:
+
+ | 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
+ |---------------:|----------:|-----------------------:|-------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
+ | 84.2007 | 89.7333 | 65.86 | 47.75 | 71.4679 | 71.4286 | 82.6462 | 59 | 78.6 | 90.34 | 79.5 | 91.432 | 69.0352 | 83.5639 | 83.3333 | 61.2005 | 67.3077 | 90.4082 | 89.7353 | 85.272 | 64.6209 | 91.9725 | 52.8054 | 86.4351 | 96.6 | 77 | 36.41 | 80.3659 | 53.0303 | 66.9643 | 85.1163 | 70.0179 | 62.3824 | 52.1127 | 63.4615 | 72 |
+
+
+ For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)