---
license: apache-2.0
language: en
tags:
  - deberta-v3-base
  - deberta-v3
  - deberta
  - text-classification
  - nli
  - natural-language-inference
  - multitask
  - multi-task
  - pipeline
  - extreme-multi-task
  - extreme-mtl
  - tasksource
  - zero-shot
  - rlhf
model-index:
  - name: deberta-v3-base-tasksource-nli
    results:
      - task:
          type: text-classification
          name: Text Classification
        dataset:
          name: glue
          type: glue
          config: rte
          split: validation
        metrics:
          - type: accuracy
            value: 0.89
pipeline_tag: zero-shot-classification
datasets:
  - glue
  - super_glue
  - anli
  - metaeval/babi_nli
  - sick
  - snli
  - scitail
  - OpenAssistant/oasst1
  - universal_dependencies
  - hans
  - qbao775/PARARULE-Plus
  - alisawuffles/WANLI
  - metaeval/recast
  - sileod/probability_words_nli
  - joey234/nan-nli
  - pietrolesci/nli_fever
  - pietrolesci/breaking_nli
  - pietrolesci/conj_nli
  - pietrolesci/fracas
  - pietrolesci/dialogue_nli
  - pietrolesci/mpe
  - pietrolesci/dnc
  - pietrolesci/gpt3_nli
  - pietrolesci/recast_white
  - pietrolesci/joci
  - martn-nguyen/contrast_nli
  - pietrolesci/robust_nli
  - pietrolesci/robust_nli_is_sd
  - pietrolesci/robust_nli_li_ts
  - pietrolesci/gen_debiased_nli
  - pietrolesci/add_one_rte
  - metaeval/imppres
  - pietrolesci/glue_diagnostics
  - hlgd
  - paws
  - quora
  - medical_questions_pairs
  - conll2003
  - Anthropic/hh-rlhf
  - Anthropic/model-written-evals
  - truthful_qa
  - nightingal3/fig-qa
  - tasksource/bigbench
  - blimp
  - cos_e
  - cosmos_qa
  - dream
  - openbookqa
  - qasc
  - quartz
  - quail
  - head_qa
  - sciq
  - social_i_qa
  - wiki_hop
  - wiqa
  - piqa
  - hellaswag
  - pkavumba/balanced-copa
  - 12ml/e-CARE
  - art
  - tasksource/mmlu
  - winogrande
  - codah
  - ai2_arc
  - definite_pronoun_resolution
  - swag
  - math_qa
  - metaeval/utilitarianism
  - mteb/amazon_counterfactual
  - SetFit/insincere-questions
  - SetFit/toxic_conversations
  - turingbench/TuringBench
  - trec
  - tals/vitaminc
  - hope_edi
  - strombergnlp/rumoureval_2019
  - ethos
  - tweet_eval
  - discovery
  - pragmeval
  - silicone
  - lex_glue
  - papluca/language-identification
  - imdb
  - rotten_tomatoes
  - ag_news
  - yelp_review_full
  - financial_phrasebank
  - poem_sentiment
  - dbpedia_14
  - amazon_polarity
  - app_reviews
  - hate_speech18
  - sms_spam
  - humicroedit
  - snips_built_in_intents
  - banking77
  - hate_speech_offensive
  - yahoo_answers_topics
  - pacovaldez/stackoverflow-questions
  - zapsdcn/hyperpartisan_news
  - zapsdcn/sciie
  - zapsdcn/citation_intent
  - go_emotions
  - scicite
  - liar
  - relbert/lexical_relation_classification
  - metaeval/linguisticprobing
  - metaeval/crowdflower
  - metaeval/ethics
  - emo
  - google_wellformed_query
  - tweets_hate_speech_detection
  - has_part
  - wnut_17
  - ncbi_disease
  - acronym_identification
  - jnlpba
  - species_800
  - SpeedOfMagic/ontonotes_english
  - blog_authorship_corpus
  - launch/open_question_type
  - health_fact
  - commonsense_qa
  - mc_taco
  - ade_corpus_v2
  - prajjwal1/discosense
  - circa
  - YaHi/EffectiveFeedbackStudentWriting
  - Ericwang/promptSentiment
  - Ericwang/promptNLI
  - Ericwang/promptSpoke
  - Ericwang/promptProficiency
  - Ericwang/promptGrammar
  - Ericwang/promptCoherence
  - PiC/phrase_similarity
  - copenlu/scientific-exaggeration-detection
  - quarel
  - mwong/fever-evidence-related
  - numer_sense
  - dynabench/dynasent
  - raquiba/Sarcasm_News_Headline
  - sem_eval_2010_task_8
  - demo-org/auditor_review
  - medmcqa
  - aqua_rat
  - RuyuanWan/Dynasent_Disagreement
  - RuyuanWan/Politeness_Disagreement
  - RuyuanWan/SBIC_Disagreement
  - RuyuanWan/SChem_Disagreement
  - RuyuanWan/Dilemmas_Disagreement
  - lucasmccabe/logiqa
  - wiki_qa
  - metaeval/cycic_classification
  - metaeval/cycic_multiplechoice
  - metaeval/sts-companion
  - metaeval/commonsense_qa_2.0
  - metaeval/lingnli
  - metaeval/monotonicity-entailment
  - metaeval/arct
  - metaeval/scinli
  - metaeval/naturallogic
  - onestop_qa
  - demelin/moral_stories
  - corypaik/prost
  - aps/dynahate
  - metaeval/syntactic-augmentation-nli
  - metaeval/autotnli
  - lasha-nlp/CONDAQA
  - openai/webgpt_comparisons
  - Dahoas/synthetic-instruct-gptj-pairwise
  - metaeval/scruples
  - metaeval/wouldyourather
  - sileod/attempto-nli
  - metaeval/defeasible-nli
  - metaeval/help-nli
  - metaeval/nli-veridicality-transitivity
  - metaeval/natural-language-satisfiability
  - metaeval/lonli
  - metaeval/dadc-limit-nli
  - ColumbiaNLP/FLUTE
  - metaeval/strategy-qa
  - openai/summarize_from_feedback
  - metaeval/folio
  - metaeval/tomi-nli
  - metaeval/avicenna
  - stanfordnlp/SHP
  - GBaker/MedQA-USMLE-4-options-hf
  - sileod/wikimedqa
  - declare-lab/cicero
  - amydeng2000/CREAK
  - metaeval/mutual
  - inverse-scaling/NeQA
  - inverse-scaling/quote-repetition
  - inverse-scaling/redefine-math
  - metaeval/puzzte
  - metaeval/implicatures
  - race
  - metaeval/spartqa-yn
  - metaeval/spartqa-mchoice
  - metaeval/temporal-nli
  - metaeval/ScienceQA_text_only
  - AndyChiang/cloth
  - metaeval/logiqa-2.0-nli
  - tasksource/oasst1_dense_flat
  - metaeval/boolq-natural-perturbations
  - metaeval/path-naturalness-prediction
  - riddle_sense
  - Jiangjie/ekar_english
  - metaeval/implicit-hate-stg1
  - metaeval/chaos-mnli-ambiguity
  - IlyaGusev/headline_cause
  - metaeval/race-c
  - metaeval/equate
  - metaeval/ambient
  - AndyChiang/dgen
  - metaeval/clcd-english
  - civil_comments
  - metaeval/acceptability-prediction
  - maximedb/twentyquestions
  - metaeval/counterfactually-augmented-snli
  - tasksource/I2D2
  - sileod/mindgames
  - metaeval/counterfactually-augmented-imdb
  - metaeval/cnli
  - metaeval/reclor
  - tasksource/oasst1_pairwise_rlhf_reward
metrics:
  - accuracy
library_name: transformers
---

# Model Card for DeBERTa-v3-base-tasksource-nli

This is DeBERTa-v3-base fine-tuned with multi-task learning on 560 tasks from the tasksource collection. This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI) and can be used as a zero-shot NLI pipeline (similar to bart-mnli, but better). You can also load any of the other training tasks (see the next section) or further fine-tune the encoder on a new task (classification, token classification, or multiple-choice).
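
For zero-shot classification, the standard `transformers` pipeline works out of the box. A minimal sketch (the candidate labels here are arbitrary examples, not from the model card):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="sileod/deberta-v3-base-tasksource-nli")

# Candidate labels are illustrative; any label set can be used zero-shot.
classifier(
    "I have a problem with my iphone that needs to be resolved asap!",
    candidate_labels=["urgent", "not urgent", "phone", "billing"],
)
# {'sequence': ..., 'labels': ['urgent', ...], 'scores': [...]}
```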

## Tasksource-adapters: one-line access to 500 tasks

```python
!pip install tasknet tasksource
import tasknet as tn

# Works for 500+ tasksource tasks
pipe = tn.load_pipeline('sileod/deberta-v3-base-tasksource-nli', 'glue/sst2')
pipe(['That movie was great !', 'Awful movie.'])
# [{'label': 'positive', 'score': 0.9956}, {'label': 'negative', 'score': 0.9967}]
```

The full list of tasks is available in the model's config.json.
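
You can inspect that list directly from the hub config. A minimal sketch, assuming only that the task names sit under some task-related key in config.json (the exact field name is an implementation detail):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("sileod/deberta-v3-base-tasksource-nli").to_dict()

# Search the raw config dict for the key holding the task list,
# since the exact field name is not guaranteed.
for key in (k for k in config if "task" in k.lower()):
    value = config[key]
    if isinstance(value, list):
        print(key, len(value), value[:5])  # e.g. the first few task names
```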

## Evaluation

This model ranked 1st among all models with the microsoft/deberta-v3-base architecture according to the IBM model recycling evaluation. Evaluation on 36 datasets using sileod/deberta-v3-base_tasksource-420 as a base model yields an average score of 80.45, compared to 79.04 for microsoft/deberta-v3-base.

| Dataset | Score | Dataset | Score | Dataset | Score |
|---|---|---|---|---|---|
| 20_newsgroup | 87.042 | isear | 72.4902 | trec_coarse | 98 |
| ag_news | 90.9 | mnli | 90.2055 | trec_fine | 91.2 |
| amazon_reviews_multi | 66.46 | mrpc | 88.9706 | tweet_ev_emoji | 46.814 |
| anli | 59.7188 | multirc | 63.9851 | tweet_ev_emotion | 84.3772 |
| boolq | 85.5352 | poem_sentiment | 87.5 | tweet_ev_hate | 58.0471 |
| cb | 85.7143 | qnli | 93.6299 | tweet_ev_irony | 81.25 |
| cola | 87.0566 | qqp | 91.7363 | tweet_ev_offensive | 85.2326 |
| copa | 69 | rotten_tomatoes | 91.0882 | tweet_ev_sentiment | 71.8821 |
| dbpedia | 79.5333 | rte | 84.4765 | wic | 69.4357 |
| esnli | 91.6735 | sst2 | 95.0688 | wnli | 73.2394 |
| financial_phrasebank | 85.8 | sst_5bins | 56.9683 | wsc | 74.0385 |
| imdb | 94.324 | stsb | 91.6654 | yahoo_answers | 72.2 |

For more information, see the Model Recycling page.

## Software and training details

- Tasksource: https://github.com/sileod/tasksource/
- Tasknet: https://github.com/sileod/tasknet/
- Training code: https://colab.research.google.com/drive/1iB4Oxl9_B5W3ZDzXoWJN-olUbqLBxgQS?usp=sharing

Training took 7 days on an RTX 6000 GPU (24 GB).

This is the shared model with the MNLI classifier on top. Its encoder was trained on many datasets, including bigbench, Anthropic RLHF, and ANLI, alongside many NLI and classification tasks, all with one shared encoder. Each task had a task-specific CLS embedding, which was dropped 10% of the time to facilitate using the model without it. All multiple-choice tasks used the same classification layers, and classification tasks shared weights when their label sets matched. The number of examples per task was capped at 64k. The model was trained for 45k steps with a batch size of 384 and a peak learning rate of 2e-5.
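
As an illustration of the per-task cap, here is a sketch with the `datasets` library (not the actual training code, which is in the Colab linked above):

```python
from datasets import load_dataset

MAX_EXAMPLES_PER_TASK = 64_000  # cap described above

def cap_task(dataset, seed=0):
    """Shuffle, then truncate a split so no single task dominates the mixture."""
    n = min(len(dataset), MAX_EXAMPLES_PER_TASK)
    return dataset.shuffle(seed=seed).select(range(n))

# SNLI is one of the training tasks; the same cap was applied to each task.
snli_train = cap_task(load_dataset("snli", split="train"))
print(len(snli_train))  # 64000 (SNLI train has ~550k examples)
```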

## Citation

More details in this article:

```bibtex
@article{sileo2023tasksource,
  title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
  author={Sileo, Damien},
  url={https://arxiv.org/abs/2301.05948},
  journal={arXiv preprint arXiv:2301.05948},
  year={2023}
}
```

## Model Card Contact

damien.sileo@inria.fr