sileod committed
Commit
689a7bb
1 Parent(s): 5550242

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -157,11 +157,11 @@ library_name: transformers
 # Model Card for DeBERTa-v3-base-tasksource-nli
 
 DeBERTa-v3-base fine-tuned with multi-task learning on 444 tasks of the [tasksource collection](https://github.com/sileod/tasksource/).
-You can fine-tune this model for any classification or multiple-choice task.
+You can further fine-tune this model for any classification or multiple-choice task.
 This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI).
 The untuned model's CLS embedding also has strong linear-probing performance (90% on MNLI), owing to the multi-task training.
 
-This is the shared model with the MNLI classifier on top. Its encoder was trained on many datasets, including bigbench, Anthropic/hh-rlhf..., alongside many NLI and classification tasks, with SequenceClassification heads over a single shared encoder.
+This is the shared model with the MNLI classifier on top. Its encoder was trained on many datasets, including bigbench, Anthropic/hh-rlhf, anli..., alongside many NLI and classification tasks, with SequenceClassification heads over a single shared encoder.
 Each task had a specific CLS embedding, which was dropped 10% of the time during training to facilitate using the model without it. All multiple-choice tasks used the same classification layers. For classification tasks, heads shared weights if their labels matched.
 The number of examples per task was capped at 64k. The model was trained for 20k steps with a batch size of 384 and a peak learning rate of 2e-5.
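For the zero-shot claim above, a minimal sketch with the transformers zero-shot-classification pipeline, which wraps an NLI entailment head like this one. The Hub id is inferred from the card title and the committer's handle, not stated in this diff, so treat it as an assumption:

```python
# Minimal zero-shot classification sketch using the MNLI head.
# ASSUMPTION: the Hub id below is inferred from the card title; verify it.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="sileod/deberta-v3-base-tasksource-nli",
)

result = classifier(
    "The update drains the battery twice as fast.",
    candidate_labels=["battery", "display", "pricing", "software"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```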
 
 
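For the further-fine-tuning claim, one plausible starting point (a sketch, not the authors' prescribed recipe) is to reload the checkpoint with a freshly initialized head sized for the new task:

```python
# Sketch: reload the checkpoint with a new classification head for fine-tuning.
# ASSUMPTIONS: Hub id inferred from the card title; num_labels is task-specific.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "sileod/deberta-v3-base-tasksource-nli"
model = AutoModelForSequenceClassification.from_pretrained(
    name,
    num_labels=5,                  # e.g. a 5-way topic classification task
    ignore_mismatched_sizes=True,  # drop the 3-way MNLI head, init a new one
)
tokenizer = AutoTokenizer.from_pretrained(name)
# ...then train as usual, e.g. with transformers.Trainer on your dataset.
```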
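And for the linear-probing claim, a toy sketch of fitting a probe on the frozen CLS embedding. The scikit-learn probe and the two stand-in premise/hypothesis pairs are illustrative assumptions; a real probe would be fit on the MNLI training set:

```python
# Toy linear-probe sketch on the frozen CLS embedding.
# ASSUMPTIONS: Hub id inferred from the card title; a real probe would be
# fit on MNLI train/dev, not these two stand-in pairs.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

name = "sileod/deberta-v3-base-tasksource-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name).eval()  # encoder only, NLI head dropped

@torch.no_grad()
def cls_embed(premises, hypotheses):
    # Encode premise/hypothesis pairs; keep the final-layer CLS vectors.
    batch = tokenizer(premises, hypotheses, padding=True,
                      truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0].numpy()

X = cls_embed(["A man is eating.", "A man is eating."],
              ["A person eats.", "Nobody is eating."])
y = [0, 2]  # MNLI convention: 0 = entailment, 2 = contradiction
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(cls_embed(["A dog runs."], ["An animal moves."])))
```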