
Model Card for DeBERTa-v3-base-tasksource-nli

This is DeBERTa-v3-base fine-tuned with multi-task learning on 600 tasks. This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for:

  • Zero-shot entailment-based classification pipeline (similar to bart-mnli), see [ZS].
  • Natural language inference and many other tasks with tasksource-adapters, see [TA]; a plain NLI inference sketch also follows the [ZS] example below.
  • Further fine-tuning on a new task (classification, token classification or multiple-choice); a minimal fine-tuning sketch is given at the end of this card.

[ZS] Zero-shot classification pipeline

from transformers import pipeline

# Load the checkpoint as an entailment-based zero-shot classifier
classifier = pipeline("zero-shot-classification", model="Azma-AI/deberta-base-multi-label-classifier")

text = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(text, candidate_labels)
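
Plain NLI inference (sketch)

The snippet below is a minimal sketch of pair-wise NLI inference through the standard text-classification pipeline, not the tasksource-adapters route referenced in [TA]; the premise and hypothesis sentences are illustrative assumptions.

from transformers import pipeline

# Treat the checkpoint as a standard pair classifier: premise + hypothesis in, NLI label out
nli = pipeline("text-classification", model="Azma-AI/deberta-base-multi-label-classifier")

premise = "A man is playing a guitar on stage."
hypothesis = "A man is performing music."
# Returns the top label with its score (label names follow the checkpoint's NLI head)
nli({"text": premise, "text_pair": hypothesis})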
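Further fine-tuning (sketch)

A minimal fine-tuning sketch with the Hugging Face Trainer is given below. The dataset (imdb), label count, and hyperparameters are placeholder assumptions for illustration only; swap in your own task, columns, and settings.

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

model_name = "Azma-AI/deberta-base-multi-label-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# ignore_mismatched_sizes swaps in a fresh classification head when your label set differs from NLI
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, ignore_mismatched_sizes=True)

# Placeholder dataset: any dataset with a "text" column and an integer "label" column works
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-tasksource",
                         per_device_train_batch_size=16,
                         num_train_epochs=1,
                         learning_rate=2e-5)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"],
                  tokenizer=tokenizer)
trainer.train()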