potsawee committed on
Commit 0ec8ca0
1 Parent(s): 719463d

Update README.md


Fix the mistake in context + question in the "prepare_answering_input" function

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -51,12 +51,12 @@ The sliding Foxes, who ended with 10 men following Wout Faes' late dismissal for
 >>> selected_answer = options[np.argmax(prob)]
 
 >>> print(prob)
-[0.02417609840631485, 0.04619544371962547, 0.7678786516189575, 0.16174985468387604]
+[0.00145158, 0.00460851, 0.99049687, 0.00344302]
 >>> print(selected_answer)
 Joao Felix
 ```
 
-where the function the prepare the input to the answering model is:
+where the function that prepare the input to the answering model is:
 
 ```python
 def prepare_answering_input(
@@ -66,7 +66,7 @@ def prepare_answering_input(
     context, # str
     max_seq_length=4096,
 ):
-    c_plus_q = question + ' ' + tokenizer.bos_token + ' ' + context
+    c_plus_q = context + ' ' + tokenizer.bos_token + ' ' + question
     c_plus_q_4 = [c_plus_q] * len(options)
     tokenized_examples = tokenizer(
         c_plus_q_4, options,
@@ -84,7 +84,6 @@ def prepare_answering_input(
     return example_encoded
 ```
 
-
 ## Related Models
 - Question/Answering Generation ```Context ---> Question + Answer```:
   - https://huggingface.co/potsawee/t5-large-generation-race-QuestionAnswer
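
The hunks above only show fragments of `prepare_answering_input`. A minimal sketch of the corrected function is given below; the tokenizer keyword arguments and the final batching step are not visible in this diff and are filled in as assumptions, so the actual README may differ slightly:

```python
def prepare_answering_input(
    tokenizer,            # e.g. the tokenizer of the answering model
    question,             # str
    options,              # List[str]: the multiple-choice candidates
    context,              # str
    max_seq_length=4096,
):
    # The fix in this commit: context first, then the BOS token, then the question.
    c_plus_q = context + ' ' + tokenizer.bos_token + ' ' + question
    # The same context+question string is paired with every answer option.
    c_plus_q_4 = [c_plus_q] * len(options)
    # Padding/truncation settings below are assumptions, not shown in the diff.
    tokenized_examples = tokenizer(
        c_plus_q_4, options,
        max_length=max_seq_length,
        padding="longest",
        truncation=True,
        return_tensors="pt",
    )
    # Add a batch dimension so shapes become (1, num_options, seq_len),
    # the layout expected by a multiple-choice head.
    example_encoded = {
        "input_ids": tokenized_examples["input_ids"].unsqueeze(0),
        "attention_mask": tokenized_examples["attention_mask"].unsqueeze(0),
    }
    return example_encoded
```

Passed through the answering model, the option-level logits give probabilities such as the corrected `prob` values in the first hunk, from which `np.argmax` selects the answer.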