MoritzLaurer committed
Commit
6e0917f
1 Parent(s): 08e39ed

Create README.md

Files changed (1)
  1. README.md +61 -0
README.md ADDED
---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I liked the movie. [SEP] The movie was good."

---
# MiniLM-L6-mnli
## Model description
This model was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli) dataset.
The base model is Microsoft's MiniLM-L6, which trades some accuracy against larger models for much faster inference.

## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model_name = "MoritzLaurer/MiniLM-L6-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I liked the movie"
hypothesis = "The movie was good."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)  # pass input_ids and attention_mask together
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
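Because this is an NLI model, it can also be used through the `zero-shot-classification` pipeline, which turns each candidate label into a hypothesis and scores it against the input. A minimal sketch (the example text and candidate labels are illustrative, not from the model card):
```python
from transformers import pipeline

# The pipeline wraps the NLI model; the labels below are made up for illustration.
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/MiniLM-L6-mnli")

text = "The new graphics card doubled our rendering speed."
candidate_labels = ["technology", "sports", "politics"]
print(classifier(text, candidate_labels))
```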
### Training data
[MultiNLI](https://huggingface.co/datasets/multi_nli).
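The dataset can be loaded directly with the `datasets` library; a minimal sketch:
```python
from datasets import load_dataset

# MultiNLI: premise/hypothesis pairs with labels 0=entailment, 1=neutral, 2=contradiction
dataset = load_dataset("multi_nli")
print(dataset["train"][0])
```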
### Training procedure
MiniLM-L6-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```python
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # fraction of steps used for learning rate warmup
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
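These arguments plug into the standard `Trainer` API. A minimal sketch of how the pieces could be wired together, reusing `model`, `tokenizer`, and `dataset` from the snippets above (an assumed reconstruction, not the author's exact training script):
```python
from transformers import Trainer, DataCollatorWithPadding

def tokenize(batch):
    # Encode premise/hypothesis pairs the same way as at inference time.
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation_matched"],
    data_collator=DataCollatorWithPadding(tokenizer),  # pad dynamically per batch
)
trainer.train()
```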
### Eval results
The model was evaluated using the (matched) test set from MultiNLI. Accuracy: 0.814.

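A comparable evaluation can be sketched with the publicly available `validation_matched` split (the MultiNLI test labels are not distributed with the dataset), reusing `model`, `tokenizer`, and `device` from the how-to snippet; this is an assumed reconstruction, not the author's evaluation script:
```python
import torch
from datasets import load_dataset

matched = load_dataset("multi_nli", split="validation_matched")

model.eval()
correct = 0
with torch.no_grad():
    for example in matched:
        inputs = tokenizer(example["premise"], example["hypothesis"],
                           truncation=True, return_tensors="pt").to(device)
        pred = model(**inputs).logits.argmax(-1).item()
        # Dataset labels use the same order as the model:
        # 0=entailment, 1=neutral, 2=contradiction
        correct += int(pred == example["label"])

print(f"Matched accuracy: {correct / len(matched):.3f}")
```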
## Limitations and bias
Please consult the original MiniLM paper and the literature on different NLI datasets for potential biases.

### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper and the respective NLI datasets, and include a link to this model on the Hugging Face Hub.
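For convenience, a BibTeX entry for the MiniLM paper:
```bibtex
@article{wang2020minilm,
  title   = {MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers},
  author  = {Wang, Wenhui and Wei, Furu and Dong, Li and Bao, Hangbo and Yang, Nan and Zhou, Ming},
  journal = {arXiv preprint arXiv:2002.10957},
  year    = {2020}
}
```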