nfliu committed
Commit 9f94e92
Parent: c5e08b8

Add inference example

Files changed (1):
  1. README.md +32 -0
README.md CHANGED
@@ -33,6 +33,38 @@ It achieves the following results on the evaluation set:
 - Loss: 0.5417
 - Accuracy: 0.7379
 
+ ## Inference Example
+
+ ```
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model = AutoModelForSequenceClassification.from_pretrained("nfliu/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large_boolq")
+ tokenizer = AutoTokenizer.from_pretrained("nfliu/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large_boolq")
+
+ # Each example is a (question, context) pair.
+ examples = [
+     ("Lake Tahoe is in California", "Lake Tahoe is a popular tourist spot in California."),
+     ("Water is wet", "Contrary to popular belief, water is not wet.")
+ ]
+
+ encoded_input = tokenizer(examples, padding=True, truncation=True, return_tensors="pt")
+
+ with torch.no_grad():
+     model_output = model(**encoded_input)
+     probabilities = torch.softmax(model_output.logits, dim=-1).cpu().tolist()
+
+ probability_no = [round(prob[0], 2) for prob in probabilities]
+ probability_yes = [round(prob[1], 2) for prob in probabilities]
+
+ for example, p_no, p_yes in zip(examples, probability_no, probability_yes):
+     print(f"Question: {example[0]}")
+     print(f"Context: {example[1]}")
+     print(f"p(No | question, context): {p_no}")
+     print(f"p(Yes | question, context): {p_yes}")
+     print()
+ ```
+
 ## Model description
 
 More information needed