nc33 eladven committed on
Commit 5f962ac
1 Parent(s): 88efd0a

Evaluation results for nc33/deberta_finetune model as a base model for other tasks (#1)


- Evaluation results for nc33/deberta_finetune model as a base model for other tasks (bc87cdb1c4a57278e9e6d423a8c1875e8c8ae896)


Co-authored-by: Elad Venezian <eladven@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -53,3 +53,17 @@ The following hyperparameters were used during training:
 - Pytorch 1.13.0+cu116
 - Datasets 2.8.0
 - Tokenizers 0.13.2
+
+ ## Model Recycling
+
+ [Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=0.47&mnli_lp=nan&20_newsgroup=-0.22&ag_news=-0.08&amazon_reviews_multi=0.62&anli=-0.22&boolq=1.36&cb=-1.79&cola=0.01&copa=9.60&dbpedia=0.23&esnli=-0.35&financial_phrasebank=4.11&imdb=-0.02&isear=0.37&mnli=-0.15&mrpc=0.99&multirc=1.27&poem_sentiment=0.77&qnli=0.05&qqp=-0.12&rotten_tomatoes=-0.18&rte=0.69&sst2=0.12&sst_5bins=1.39&stsb=0.13&trec_coarse=-0.56&trec_fine=-0.22&tweet_ev_emoji=0.93&tweet_ev_emotion=1.13&tweet_ev_hate=3.18&tweet_ev_irony=-0.74&tweet_ev_offensive=-1.34&tweet_ev_sentiment=-1.61&wic=-0.53&wnli=-2.61&wsc=0.34&yahoo_answers=0.30&model_name=nc33%2Fdeberta_finetune&base_name=microsoft%2Fdeberta-v3-base) using nc33/deberta_finetune as a base model yields an average score of 79.51, compared with 79.04 for microsoft/deberta-v3-base.
+
+ The model is ranked 3rd among all tested models for the microsoft/deberta-v3-base architecture as of 06/02/2023.
+ Results:
+
+ | 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
+ |---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|-------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
+ | 86.1922 | 90.3667 | 67.48 | 58.5625 | 84.3425 | 73.2143 | 86.5772 | 68 | 79.6667 | 91.5717 | 88.6 | 94.472 | 72.2295 | 89.6359 | 90.1961 | 63.5314 | 87.5 | 93.5567 | 91.672 | 90.2439 | 83.0325 | 95.1835 | 58.371 | 90.4054 | 97.2 | 90.8 | 47.122 | 85.0809 | 59.3939 | 79.0816 | 83.7209 | 70.197 | 70.6897 | 67.6056 | 64.4231 | 72.3333 |
+
+
+ For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
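
As a rough illustration of the "base model for other tasks" setup described in the added section, the sketch below loads nc33/deberta_finetune with the Hugging Face `transformers` and `datasets` libraries and fine-tunes it on one of the listed tasks. This is a minimal sketch under assumptions: the chosen dataset (`rotten_tomatoes`), the label count, and the hyperparameters are illustrative only and are not the configuration behind the reported scores.

```python
# Minimal sketch: reuse nc33/deberta_finetune as the base model for a new task.
# The dataset, num_labels, and hyperparameters below are illustrative assumptions,
# not the setup behind the scores reported in the table above.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE = "nc33/deberta_finetune"

tokenizer = AutoTokenizer.from_pretrained(BASE)
# ignore_mismatched_sizes lets a fresh 2-label head replace the checkpoint's own head
# if their shapes differ; the encoder weights are reused either way.
model = AutoModelForSequenceClassification.from_pretrained(
    BASE, num_labels=2, ignore_mismatched_sizes=True
)

dataset = load_dataset("rotten_tomatoes")  # binary sentiment task from the table above

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="deberta_finetune-rotten_tomatoes",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)

trainer.train()
print(trainer.evaluate())
```

This mirrors the standard transformers fine-tuning recipe; the only change for model recycling is pointing the checkpoint name at nc33/deberta_finetune instead of microsoft/deberta-v3-base.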