MoritzLaurer committed c7f4bb5 (1 parent: 18e5086): Create README.md

---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was good."
---
# DeBERTa-v3-base-mnli
## Model description
This model was trained on the MultiNLI dataset, which consists of 392,702 NLI hypothesis-premise pairs.
The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective; see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf). For a more powerful model, check out [DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli), which was trained on even more data.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

# Tokenize the premise-hypothesis pair and run it through the model
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    output = model(**inputs)
prediction = torch.softmax(output.logits[0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
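Because the entailment head of an NLI model can be repurposed for zero-shot classification (as the `zero-shot-classification` tag above suggests), here is a minimal sketch using the transformers pipeline; the example text and candidate labels are illustrative, not from the original card:
```python
from transformers import pipeline

# The zero-shot pipeline reuses the model's entailment logits under the hood
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli")

text = "The new graphics card renders 4K games at a stable 120 fps."
candidate_labels = ["technology", "sports", "politics"]  # illustrative labels
print(classifier(text, candidate_labels))
```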
### Training data
This model was trained on the MultiNLI dataset, which consists of 392,702 NLI hypothesis-premise pairs.

### Training procedure
DeBERTa-v3-base-mnli was trained using the Hugging Face Trainer with the following hyperparameters.
```python
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # fraction of training steps used for learning rate warmup
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
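For context, the following sketch shows how such arguments would typically be wired into the Hugging Face Trainer. Only the hyperparameters above are documented; the dataset loading and tokenization shown here are assumptions about the training script, not the author's exact code:
```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer

# Assumed reconstruction of the surrounding training script
model_name = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

mnli = load_dataset("multi_nli")  # premise / hypothesis / label columns

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

encoded = mnli.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=training_args,          # the TrainingArguments shown above
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation_matched"],
    tokenizer=tokenizer,         # enables dynamic padding per batch
)
trainer.train()
```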
### Eval results
The model was evaluated using the matched test set of MultiNLI. The metric used is accuracy.

| mnli-m |
|--------|
| 0.90   |
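A hedged sketch of how this accuracy figure could be reproduced follows; since the MultiNLI test labels are not public, the matched validation split stands in for the held-out set here:
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed reproduction sketch, not the author's evaluation script
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device).eval()

dataset = load_dataset("multi_nli", split="validation_matched")
correct = 0
for row in dataset:
    inputs = tokenizer(row["premise"], row["hypothesis"],
                       truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    correct += int(pred == row["label"])  # 0 entailment, 1 neutral, 2 contradiction
print(f"mnli-m accuracy: {correct / len(dataset):.3f}")
```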
## Limitations and bias
Please consult the original DeBERTa paper and the literature on the different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper and the respective NLI datasets, and include a link to this model on the Hugging Face Hub.