MoritzLaurer (HF staff) committed
Commit e239423
1 parent: 8919361

Update README.md

Files changed (1)
  1. README.md +7 -3
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 language:
 - en
+license: mit
 tags:
 - text-classification
 - zero-shot-classification
@@ -22,11 +23,14 @@ Note that the model was trained on binary NLI to predict either "entailment" or
 
 The base model is [DeBERTa-v3-xsmall from Microsoft](https://huggingface.co/microsoft/deberta-v3-xsmall). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
 
+For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.
+
 ## Intended uses & limitations
 #### How to use the model
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
+device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
 
 model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -71,11 +75,11 @@ speed (text/sec, GPU Tesla P100, 128 batch) | 473 | 487 | 230 | 390 | 340 | 586
 ## Limitations and bias
 Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
 
-### BibTeX entry and citation info
-If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
+## Citation
+If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
 
 ### Ideas for cooperation or questions?
 If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
 
 ### Debugging and issues
-Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.
+Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
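
For readers who want to run the updated snippet end to end, the hunk above cuts off right after the tokenizer is loaded. Below is a minimal sketch of how binary-NLI inference with this checkpoint typically continues; the premise/hypothesis strings and the label order are assumptions for illustration (the hunk's context line only says the model was trained on binary NLI predicting "entailment" or not), so verify the labels against the actual checkpoint.

```python
# Sketch only: continues the README snippet beyond the lines shown in the hunk.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

# Example pair, made up for illustration.
premise = "The new movie was absolutely terrible and a waste of time."
hypothesis = "The movie was good."

# Encode the premise/hypothesis pair and move the tensors to the model's device.
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits

# Binary NLI head: two output classes. The label order here is an assumption;
# check model.config.id2label on the real checkpoint before relying on it.
probs = torch.softmax(logits[0], dim=-1).tolist()
label_names = ["entailment", "not_entailment"]
print({name: round(prob, 3) for name, prob in zip(label_names, probs)})
```

As the Debugging and issues note in the diff says, this assumes a reasonably recent Transformers version (>=4.13) so that the DeBERTa-v3 tokenizer loads correctly.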