Minimal reproducible inference code please

#2
by skoll520 - opened

I tried the same input in your API as with the transformers pipeline, but I'm getting different results. Can you post the API code?
I found this AI very useful, thank you.

Hi, thank you!
The base model is better documented:
https://huggingface.co/sileod/deberta-v3-base-tasksource-nli
In which specific task and use case do you see differences? It could be that the transformers pipeline doesn't use a task embedding, while the tasknet pipeline does. However, it should also work without a task embedding; performance might just be slightly lower on some zero-shot tasks.
If you don't find the answer you're looking for, don't hesitate to ask.
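For reference, a minimal sketch of zero-shot inference with the plain transformers pipeline (no task embedding, as described above). The model name matches this repository's base variant; the input sentence and labels are illustrative examples, not from the thread.

```python
# Minimal zero-shot classification sketch using the transformers pipeline.
# Assumes the tasksource NLI model discussed in this thread.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="sileod/deberta-v3-base-tasksource-nli",
)

# Example input and candidate labels (illustrative only).
result = classifier(
    "The new phone's battery lasts two full days on a single charge.",
    candidate_labels=["electronics", "cooking", "politics"],
)

# The pipeline returns a dict with "sequence", "labels" (sorted by score),
# and "scores" (entailment-based probabilities over the labels).
print(result["labels"], result["scores"])
```

Results from this pipeline may differ slightly from the hosted API if the latter applies a task embedding, as noted above.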
