MoritzLaurer (HF staff) committed f2c795e (1 parent: 267172a)

Update README.md

Files changed (1):
  1. README.md +10 -16
README.md CHANGED
@@ -7,31 +7,34 @@ tags:
 metrics:
 - accuracy
 widget:
- - text: "70-85% of the population needs to get vaccinated against the novel coronavirus to achieve herd immunity."
+ - text: "I first thought that I liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was good."

 ---
 # DeBERTa-v3-base-mnli-fever-anli
 ## Model description
 This model was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs. This base model outperforms almost all large models on the [ANLI benchmark](https://github.com/facebookresearch/anli).
- The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf).
+ The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf).

 ## Intended uses & limitations
 #### How to use the model
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
+
 model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForSequenceClassification.from_pretrained(model_name)
- text = "The new variant first detected in southern England in September is blamed for sharp rises in levels of positive tests in recent weeks in London, south-east England and the east of England"
- input = tokenizer(text, truncation=True, return_tensors="pt")
- output = model(input["input_ids"])
+
+ premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
+ hypothesis = "The movie was good."
+
+ input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
+ output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
 prediction = torch.softmax(output["logits"][0], -1).tolist()
 label_names = ["entailment", "neutral", "contradiction"]
 prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
 print(prediction)
 ```
-
 ### Training data
 DeBERTa-v3-base-mnli-fever-anli was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs.

@@ -48,7 +51,6 @@ training_args = TrainingArguments(
 fp16=True # mixed precision training
 )
 ```
-
 ### Eval results
 The model was evaluated using the test sets for MultiNLI and ANLI and the dev set for Fever-NLI. The metric used is accuracy.

@@ -56,16 +58,8 @@ mnli-m | mnli-mm | fever-nli | anli-all | anli-r3
 ---------|----------|---------|----------|----------
 0.903 | 0.903 | 0.777 | 0.579 | 0.495

-
 ## Limitations and bias
 Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

 ### BibTeX entry and citation info
- ```bibtex
- @unpublished{
- title={DeBERTa-v3-base-mnli-fever-anli},
- author={Moritz Laurer},
- year={2021},
- note={Unpublished paper}
- }
- ```
+ If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
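
The how-to-use snippet added in this commit refers to a `device` variable that is never defined, and it passes only `input_ids` to the model. A minimal, self-contained sketch of the same usage, assuming the device is picked up front and both the model and the encoded inputs are moved to it, could look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: choose the device explicitly; the committed snippet references
# `device` without defining it.
device = "cuda:0" if torch.cuda.is_available() else "cpu"

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

# Encode the premise-hypothesis pair and move the tensors to the model's device.
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)  # forwards input_ids plus attention_mask

prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)  # for this pair, "contradiction" should receive the highest score
```

Passing the whole `inputs` encoding (rather than `input_ids` alone) keeps the attention mask aligned with the tokenized pair; the label order follows the snippet in the model card above.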