nfliu committed
Commit efc939f
1 Parent(s): dfd0426

Add usage example

Files changed (1):
  1. README.md +32 -0

README.md CHANGED
@@ -35,6 +35,38 @@ It achieves the following results on the evaluation set:
  - Loss: 0.6057
  - Accuracy: 0.8569
 
+ ## Example
+
+ ```
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model = AutoModelForSequenceClassification.from_pretrained("nfliu/roberta-large_boolq")
+ tokenizer = AutoTokenizer.from_pretrained("nfliu/roberta-large_boolq")
+
+ # Each example is a (question, context) pair.
+ examples = [
+     ("Lake Tahoe is in California", "Lake Tahoe is a popular tourist spot in California."),
+     ("Water is wet", "Contrary to popular belief, water is not wet.")
+ ]
+
+ encoded_input = tokenizer(examples, padding=True, truncation=True, return_tensors="pt")
+
+ with torch.no_grad():
+     model_output = model(**encoded_input)
+     probabilities = torch.softmax(model_output.logits, dim=-1).cpu().tolist()
+
+ probability_no = [round(prob[0], 2) for prob in probabilities]
+ probability_yes = [round(prob[1], 2) for prob in probabilities]
+
+ for example, p_no, p_yes in zip(examples, probability_no, probability_yes):
+     print(f"Question: {example[0]}")
+     print(f"Context: {example[1]}")
+     print(f"p(No | question, context): {p_no}")
+     print(f"p(Yes | question, context): {p_yes}")
+     print()
+ ```
+
  ## Model description
 
  More information needed
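
The added snippet turns the model's two-class logits into yes/no probabilities with a softmax. As a minimal self-contained sketch of that step (plain Python, with hypothetical logits standing in for the model's real outputs, not values produced by `nfliu/roberta-large_boolq`):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-class logits for a single (question, context) pair.
logits = [-1.2, 2.3]
p_no, p_yes = softmax(logits)
print(round(p_no, 2), round(p_yes, 2))  # → 0.03 0.97
```

This mirrors what `torch.softmax(model_output.logits, dim=-1)` does per row in the diff above, and the rounding matches the `round(prob, 2)` calls there.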