sileod committed
Commit 0e67d52
1 Parent(s): 4b62f0c

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -225,7 +225,7 @@ library_name: transformers
 # Model Card for DeBERTa-v3-base-tasksource-nli
 
 DeBERTa-v3-base fine-tuned with multi-task learning on 520 tasks of the [tasksource collection](https://github.com/sileod/tasksource/)
-This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for zero-shot NLI pipeline.
+This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for zero-shot NLI pipeline (similar to bart-mnli but better).
 You can further fine-tune this model to use it for any classification or multiple-choice task.
 The untuned model CLS embedding also has strong linear probing performance (90% on MNLI), due to the multitask training.
 
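Usage note for the changed line above: a minimal sketch of the zero-shot NLI pipeline the README refers to, using the standard `transformers` zero-shot-classification pipeline. The repo id `sileod/deberta-v3-base-tasksource-nli` is an assumption inferred from the model card title, and the example text and labels are illustrative only.

```python
# Minimal sketch of the zero-shot NLI pipeline mentioned in the README.
# Assumption: the checkpoint is published on the Hub as
# "sileod/deberta-v3-base-tasksource-nli" (inferred from the model card title).
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="sileod/deberta-v3-base-tasksource-nli",
)

# Hypothetical input: classify a sentence against arbitrary candidate labels.
result = classifier(
    "The new GPU doubles training throughput compared to the previous generation.",
    candidate_labels=["hardware", "finance", "sports"],
)

# The pipeline returns labels sorted by score; print the top prediction.
print(result["labels"][0], result["scores"][0])
```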