---
language:
- en
license: mit
tags:
- NLI
- deberta-v3
datasets:
- mnli
- facebook/anli
- fever
- wanli
- ling
- amazonpolarity
- imdb
- appreviews
inference: false
pipeline_tag: zero-shot-classification
base_model: MoritzLaurer/deberta-v3-large-zeroshot-v1
---
# ONNX version of MoritzLaurer/deberta-v3-large-zeroshot-v1

This model is a conversion of [MoritzLaurer/deberta-v3-large-zeroshot-v1](https://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1) to ONNX format using the 🤗 Optimum library.

[MoritzLaurer/deberta-v3-large-zeroshot-v1](https://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1) is designed for zero-shot classification: given a text, it determines whether a hypothesis is `true` or `not_true`, a task format based on Natural Language Inference (NLI).
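
As a rough illustration of this NLI framing (the labels below are illustrative, and the hypothesis template is the default one used by the 🤗 zero-shot pipeline), each candidate label is rewritten as a hypothesis and the model scores whether the text entails it:

```python
# Conceptual sketch of NLI-based zero-shot classification (not the pipeline
# internals verbatim): each candidate label becomes a hypothesis.
text = "Last week I upgraded my iOS version and ever since then my phone has been overheating."
candidate_labels = ["mobile", "website", "billing", "account access"]

hypothesis_template = "This example is {}."
premise_hypothesis_pairs = [
    (text, hypothesis_template.format(label)) for label in candidate_labels
]
# Each (premise, hypothesis) pair is passed to the NLI model; the label whose
# hypothesis receives the highest entailment ("true") probability ranks first.
```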
## Usage

Loading the model requires the 🤗 Optimum library to be installed (for example via `pip install optimum[onnxruntime]`).
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("laiyer/deberta-v3-large-zeroshot-v1-onnx")
# Restrict the tokenizer outputs to the inputs the ONNX graph expects
# (drops the token_type_ids the DeBERTa tokenizer would otherwise produce).
tokenizer.model_input_names = ["input_ids", "attention_mask"]
model = ORTModelForSequenceClassification.from_pretrained("laiyer/deberta-v3-large-zeroshot-v1-onnx")

classifier = pipeline(
    task="zero-shot-classification",
    model=model,
    tokenizer=tokenizer,
)

classifier_output = classifier(
    "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.",
    ["mobile", "website", "billing", "account access"],
)
print(classifier_output)
```
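
The zero-shot pipeline returns a dictionary with the original `sequence`, the candidate `labels` sorted from most to least likely, and the corresponding `scores`. As a small follow-up (reusing `classifier_output` from the snippet above), the top prediction can be read off like this:

```python
# "labels" and "scores" are parallel lists, already sorted by descending score.
top_label = classifier_output["labels"][0]
top_score = classifier_output["scores"][0]
print(f"Predicted topic: {top_label} (score {top_score:.3f})")
```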
## LLM Guard

## Community

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, or engage in discussions about LLM security!