eladven committed on
Commit ab2324a
1 Parent(s): 94bbbef

Evaluation results for ibm/ColD-Fusion-itr13-seed2 model as a base model for other tasks (#7)


- Evaluation results for ibm/ColD-Fusion-itr13-seed2 model as a base model for other tasks (0390a167f77db5d575e3e65e575cb1bf0eaf949f)

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -53,7 +53,7 @@ output = model(encoded_input)
 ## Evaluation results
 ## Model Recycling
 
-[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=2.50&mnli_lp=nan&20_newsgroup=1.08&ag_news=-0.47&amazon_reviews_multi=0.14&anli=2.75&boolq=3.32&cb=21.52&cola=0.07&copa=24.30&dbpedia=0.17&esnli=0.05&financial_phrasebank=2.19&imdb=-0.03&isear=0.67&mnli=0.41&mrpc=-0.12&multirc=2.46&poem_sentiment=4.52&qnli=0.27&qqp=0.37&rotten_tomatoes=3.04&rte=10.99&sst2=1.18&sst_5bins=1.47&stsb=1.72&trec_coarse=-0.11&trec_fine=3.24&tweet_ev_emoji=-1.35&tweet_ev_emotion=1.22&tweet_ev_hate=-0.34&tweet_ev_irony=5.48&tweet_ev_offensive=1.49&tweet_ev_sentiment=-1.25&wic=4.58&wnli=-5.49&wsc=0.19&yahoo_answers=0.16&model_name=ibm%2FColD-Fusion-itr13-seed2&base_name=roberta-base) using ibm/ColD-Fusion-itr13-seed2 as a base model. yields average score of 78.72 in comparison to 76.22 by roberta-base.
+[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=2.50&mnli_lp=nan&20_newsgroup=1.08&ag_news=-0.47&amazon_reviews_multi=0.14&anli=2.75&boolq=3.32&cb=21.52&cola=0.07&copa=24.30&dbpedia=0.17&esnli=0.05&financial_phrasebank=2.19&imdb=-0.03&isear=0.67&mnli=0.41&mrpc=-0.12&multirc=2.46&poem_sentiment=4.52&qnli=0.27&qqp=0.37&rotten_tomatoes=3.04&rte=10.99&sst2=1.18&sst_5bins=1.47&stsb=1.72&trec_coarse=-0.11&trec_fine=3.24&tweet_ev_emoji=-1.35&tweet_ev_emotion=1.22&tweet_ev_hate=-0.34&tweet_ev_irony=5.48&tweet_ev_offensive=1.49&tweet_ev_sentiment=-1.25&wic=4.58&wnli=-5.49&wsc=0.19&yahoo_answers=0.16&model_name=ibm%2FColD-Fusion-itr13-seed2&base_name=roberta-base) using ibm/ColD-Fusion-itr13-seed2 as a base model yields average score of 78.72 in comparison to 76.22 by roberta-base.
 
 The model ranks 1st among all tested models for the roberta-base architecture as of 13/12/2022
 Results:
@@ -64,8 +64,6 @@ Results:
 
 
 For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
-
-```bibtex
 @article{ColDFusion,
  author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and},
  title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
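
For context on the README being edited: the hunk header above references `output = model(encoded_input)` from the model card's usage snippet, which loads ibm/ColD-Fusion-itr13-seed2 as a drop-in replacement for roberta-base. The sketch below is an illustration of that usage assuming the standard Hugging Face `transformers` `AutoTokenizer`/`AutoModel` API (note the `**encoded_input` unpacking), not necessarily the exact snippet in the card.

```python
from transformers import AutoTokenizer, AutoModel

# Load ColD-Fusion as a starting checkpoint in place of roberta-base.
model_name = "ibm/ColD-Fusion-itr13-seed2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode text and run a forward pass to get contextual embeddings.
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
output = model(**encoded_input)
print(output.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```

For fine-tuning on one of the 36 downstream tasks mentioned in the evaluation, one would typically swap `AutoModel` for `AutoModelForSequenceClassification` with the task's number of labels.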