xioaditya committed
Commit 4346c83 · 1 Parent(s): 1c2a2a2

Update README.md (#2)


- Update README.md (9e702316360fbb3d3202de10b7b09fa7a6f6c989)

Files changed (1)
  1. README.md +28 -2
README.md CHANGED
@@ -58,18 +58,44 @@ prompt += '''Instruction:\tYou are to try to answer the following question using
  Instruction:\tYour response should be a well formed JSON object with an 'answerable' property followed by an 'answer' property.
  Instruction:\tIf you cannot answer the question given the information, the value of the 'answerable' should be 'false' and the 'answer' should be an empty string.
  Instruction:\tIf you can answer the question given the information, the value of the 'answerable' should be 'true' and your answer should be the string value of the 'answer' property.
- ''' + info + qs + " Response:"
+ ''' + info + qs
  ```

  Inference:

+ We recommend using the newline character as a stopping criterion, as follows:
+
+ ```python
+ from transformers import StoppingCriteria, StoppingCriteriaList
+
+ eos_tokens = [tokenizer.eos_token, '\n']
+ eos_token_ids = [tokenizer.encode(token)[0] for token in eos_tokens]
+
+ class MultipleEOSTokensStoppingCriteria(StoppingCriteria):
+     def __init__(self, eos_token_ids):
+         self.eos_token_ids = set(eos_token_ids)
+     def __call__(self, input_ids, scores) -> bool:
+         if input_ids.shape[-1] <= 1:
+             return False
+         for eos_token_id in self.eos_token_ids:
+             if eos_token_id == input_ids[0, -1].item():
+                 return True
+         return False
+
+ # Define stopping criteria
+ multiple_eos_tokens_processor = MultipleEOSTokensStoppingCriteria(eos_token_ids)
+ stopping_criteria = StoppingCriteriaList([multiple_eos_tokens_processor])
+ ```
+ It can be used in inference as follows:
+
  ```python
  inputs = tokenizer(prompt, return_tensors="pt").to(device)
  generate_ids = model.generate(
      **inputs,
      max_new_tokens=1024,
      temperature=0.0,
-     num_beams=2
+     num_beams=2,
+     stopping_criteria=stopping_criteria
  )
  response = tokenizer.decode(generate_ids[0],
      skip_special_tokens=True,
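For context, the prompt in this hunk asks the model to reply with a JSON object containing an 'answerable' and an 'answer' property. Below is a minimal sketch of how the decoded output could be turned back into those two fields; the placeholder `prompt`/`response` strings and the prompt-stripping step are assumptions for illustration, not part of this commit:

```python
import json

# Placeholder strings standing in for the `prompt` and decoded `response`
# produced by the snippet above (assumed values, for illustration only).
prompt = "Instruction:\tYou are to try to answer the following question ..."
response = prompt + ' {"answerable": true, "answer": "42"}\n'

# For decoder-only models, decoding generate_ids[0] typically yields the
# prompt followed by the completion, so strip the prompt prefix before
# parsing the JSON object.
completion = response[len(prompt):].strip()
try:
    parsed = json.loads(completion)
    answerable = bool(parsed.get("answerable", False))
    answer = parsed.get("answer", "") if answerable else ""
except json.JSONDecodeError:
    answerable, answer = False, ""

print(answerable, answer)  # True 42
```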