sileod committed on
Commit 4b62f0c
1 Parent(s): 35495a6

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -225,15 +225,15 @@ library_name: transformers
# Model Card for DeBERTa-v3-base-tasksource-nli

DeBERTa-v3-base fine-tuned with multi-task learning on 520 tasks of the [tasksource collection](https://github.com/sileod/tasksource/)
+ This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used in a zero-shot NLI pipeline.
You can further fine-tune this model to use it for any classification or multiple-choice task.
- This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI).
The untuned model CLS embedding also has strong linear probing performance (90% on MNLI), due to the multitask training.

- This is the shared model with the MNLI classifier on top. Its encoder was trained on many datasets including bigbench, Anthropic rlhf, anli... alongside many NLI and classification tasks with a SequenceClassification heads while using only one shared encoder.
+ This is the shared model with the MNLI classifier on top. Its encoder was trained on many datasets including bigbench, Anthropic rlhf, anli... alongside many NLI and classification tasks with one shared encoder.
Each task had a specific CLS embedding, which was dropped 10% of the time to facilitate model use without it. All multiple-choice models used the same classification layers. For classification tasks, models shared weights if their labels matched.
The number of examples per task was capped to 64k. The model was trained for 45k steps with a batch size of 384, and a peak learning rate of 2e-5.

- The list of tasks is available in tasks.md
+ The list of tasks is available in the model config.

tasksource training code: https://colab.research.google.com/drive/1iB4Oxl9_B5W3ZDzXoWJN-olUbqLBxgQS?usp=sharing
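To ground the zero-shot claim added in this commit: a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub as `sileod/deberta-v3-base-tasksource-nli` and that its MNLI-style entailment head works with the stock `transformers` zero-shot-classification pipeline (the example text and candidate labels are illustrative, not from this commit):

```python
# Minimal zero-shot NLI sketch; the repo id and inputs are assumptions.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="sileod/deberta-v3-base-tasksource-nli",
)

result = classifier(
    "The new movie was a complete waste of time.",
    candidate_labels=["positive", "negative", "neutral"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```

The pipeline phrases each candidate label as an NLI hypothesis and ranks labels by entailment probability, which is why an MNLI-style head is required.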
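For the "further fine-tune this model for any classification task" sentence, a hedged sketch of swapping the pretrained NLI head for a fresh task head; the two-label setup is an assumption for illustration:

```python
# Head-swap sketch for downstream fine-tuning; num_labels=2 is illustrative.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "sileod/deberta-v3-base-tasksource-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
# ignore_mismatched_sizes replaces the pretrained 3-way NLI classifier with a
# randomly initialized 2-way head instead of raising a shape-mismatch error.
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=2, ignore_mismatched_sizes=True
)
```

From here the model trains like any `transformers` sequence classifier, while the encoder keeps its multitask weights.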
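And for the linear-probing number quoted for the untuned CLS embedding, a rough sketch of the setup; the toy premise/hypothesis pairs and the scikit-learn probe are placeholders, not the exact protocol behind the 90% MNLI figure:

```python
# Linear-probe sketch: frozen encoder, [CLS] embedding, linear classifier on top.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

name = "sileod/deberta-v3-base-tasksource-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)  # encoder only; the NLI head is dropped
encoder.eval()

def cls_embedding(premise: str, hypothesis: str) -> torch.Tensor:
    """Frozen [CLS] embedding for one premise/hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    return hidden[0, 0]  # first token is [CLS]

# Toy stand-in for an NLI training set: 0 = entailment, 1 = contradiction.
pairs = [
    ("A man is sleeping.", "A person is asleep.", 0),
    ("A man is sleeping.", "A man is running a marathon.", 1),
]
X = torch.stack([cls_embedding(p, h) for p, h, _ in pairs]).numpy()
y = [label for _, _, label in pairs]
probe = LogisticRegression(max_iter=1000).fit(X, y)  # the linear probe
```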