sileod committed on
Commit 7b9bb6f
1 Parent(s): e624a36

Update README.md

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -271,8 +271,8 @@ pipeline_tag: zero-shot-classification
 
 This is [DeBERTa-v3-base](https://hf.co/microsoft/deberta-v3-base) fine-tuned with multi-task learning on 560 tasks of the [tasksource collection](https://github.com/sileod/tasksource/).
 This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for:
-- Natural language inference, and many other tasks with tasksource-adapters, see [TA]
 - Zero-shot entailment-based classification pipeline (similar to bart-mnli), see [ZS].
+- Natural language inference, and many other tasks with tasksource-adapters, see [TA]
 - Further fine-tuning with a new task (classification, token classification or multiple-choice).
 
 # [ZS] Zero-shot classification pipeline
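
For reference, a minimal sketch of the entailment-based zero-shot classification pipeline the README describes. The model id `sileod/deberta-v3-base-tasksource-nli` is assumed from the repo context, and the example text and candidate labels are illustrative only:

```python
from transformers import pipeline

# Model id assumed from the repo context; adjust if the checkpoint
# is hosted under a different name.
classifier = pipeline(
    "zero-shot-classification",
    model="sileod/deberta-v3-base-tasksource-nli",
)

# Each candidate label is turned into an entailment hypothesis and scored
# against the input text, as in bart-mnli style zero-shot classification.
result = classifier(
    "The new GPU drivers cut inference latency in half.",
    candidate_labels=["hardware", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])
```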