---
language:
- en
tags:
- text-classification
- zero-shot-classification
pipeline_tag: zero-shot-classification
library_name: transformers
---
# deberta-v3-base-zeroshot-v1

## Model description
The model is designed for zero-shot classification with the Hugging Face pipeline.
It should be substantially better at zero-shot classification than my other zero-shot models on the Hugging Face Hub: https://huggingface.co/MoritzLaurer.

The model can do one universal task: determine whether a hypothesis is `true` or `not_true` given a text.
This task format is based on Natural Language Inference (NLI).
The task is so universal that any classification task can be reformulated into this `true` vs. `not_true` task.

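To make the format concrete, here is a minimal sketch of this reformulation outside the pipeline: a candidate label is turned into a hypothesis, and the model scores whether that hypothesis holds for the text. The hypothesis wording below is only illustrative; the zero-shot pipeline constructs similar hypotheses automatically via its `hypothesis_template`.

```python
# Minimal sketch (not the pipeline): reformulate the label "politics" as a
# hypothesis and let the model judge whether it is true for the text.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/deberta-v3-base-zeroshot-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis = "This example is about politics"  # illustrative wording for the label "politics"

inputs = tokenizer(text, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Two output classes; their names come from the model config.
probs = torch.softmax(logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```

Repeating this for every candidate label and comparing the scores is essentially what the zero-shot pipeline does under the hood.
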
The model was trained on a mixture of 27 tasks and 310 classes that have been reformatted into this universal format.
1. 26 classification tasks with ~400k texts:
['amazonpolarity', 'imdb', 'appreviews', 'yelpreviews', 'rottentomatoes',
'emotiondair', 'emocontext', 'empathetic',
'financialphrasebank', 'banking77', 'massive',
'wikitoxic_toxicaggregated', 'wikitoxic_obscene', 'wikitoxic_threat', 'wikitoxic_insult', 'wikitoxic_identityhate',
'hateoffensive', 'hatexplain', 'biasframes_offensive', 'biasframes_sex', 'biasframes_intent',
'agnews', 'yahootopics',
'trueteacher', 'spam', 'wellformedquery']
2. Five NLI datasets with ~885k texts: ["mnli", "anli", "fever", "wanli", "ling"]

Note that, compared to other NLI models, this model predicts two classes (`true` vs. `not_true`) as opposed to three (entailment/neutral/contradiction).

### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline

# Load the model into the standard zero-shot classification pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-base-zeroshot-v1")

sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]

# multi_label=False: label scores are normalized so that they sum to 1
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
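
Continuing the snippet above: if several labels can apply at once, `multi_label=True` scores each label independently, and `hypothesis_template` controls how each label is inserted into the hypothesis (both are standard parameters of the zero-shot pipeline).

```python
# Score each label independently (probabilities no longer sum to 1) and
# customize the hypothesis wording; "{}" is replaced by each candidate label.
output = classifier(
    sequence_to_classify,
    candidate_labels,
    multi_label=True,
    hypothesis_template="This example is about {}",
)
print(output)
```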

### Details on data and training

The code for preparing the data and for training and evaluating the model is fully open source here: https://github.com/MoritzLaurer/zeroshot-classifier/tree/main

## Limitations and bias
The model can only do text classification tasks.

Please consult the original DeBERTa paper and the papers for the different datasets for potential biases.

## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).

### Debugging and issues
Note that DeBERTa-v3 was released on 06.12.21, and older versions of HF Transformers seem to have issues running the model (e.g. errors with the tokenizer). Using Transformers >= 4.13 might solve some issues.
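
As a quick environment check (a minimal sketch; the DeBERTa-v3 tokenizer may also require the `sentencepiece` package):

```python
# Verify the installed Transformers version before loading the model.
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.13"), (
    "please upgrade: pip install --upgrade transformers sentencepiece"
)
```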