sjrhuschlee committed on
Commit
a0b2524
1 Parent(s): 635b8b7

Update README.md

Files changed (1)
  1. README.md +6 -3
README.md CHANGED
@@ -142,9 +142,11 @@ model-index:
 
 This is the [flan-t5-large](https://huggingface.co/google/flan-t5-large) model, fine-tuned on the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It has been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.
 
+**UPDATE:** As of transformers version 4.31.0, `trust_remote_code=True` is no longer necessary, and passing it will cause `AutoModelForQuestionAnswering.from_pretrained()` to not work properly.
+
 This model was trained using LoRA, available through the [PEFT library](https://github.com/huggingface/peft).
 
-NOTE: The `<cls>` token must be manually added to the beginning of the question for this model to work properly. It uses the `<cls>` token to make "no answer" predictions. The T5 tokenizer does not add this special token automatically, which is why it must be added manually.
+**NOTE:** The `<cls>` token must be manually added to the beginning of the question for this model to work properly. It uses the `<cls>` token to make "no answer" predictions. The T5 tokenizer does not add this special token automatically, which is why it must be added manually.
 
 ## Overview
 **Language model:** flan-t5-large
@@ -172,7 +174,7 @@ nlp = pipeline(
     'question-answering',
     model=model_name,
     tokenizer=model_name,
-    trust_remote_code=True,
+    # trust_remote_code=True,  # Do not use with transformers>=4.31.0
 )
 qa_input = {
     'question': f'{nlp.tokenizer.cls_token}Where do I live?',  # '<cls>Where do I live?'
@@ -183,7 +185,8 @@ res = nlp(qa_input)
 
 # b) Load model & tokenizer
 model = AutoModelForQuestionAnswering.from_pretrained(
-    model_name, trust_remote_code=True
+    model_name,
+    # trust_remote_code=True  # Do not use with transformers>=4.31.0
 )
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 
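The two caveats this commit documents (the version-gated `trust_remote_code` argument and the manual `<cls>` prefix) can be sketched as plain helpers. This is a minimal sketch: `needs_trust_remote_code` and `prepend_cls` are hypothetical names introduced here for illustration, not part of the transformers API or this model repo.

```python
# Sketch of the two caveats above, as standalone helpers that need no
# model download. Both function names are illustrative only.

def needs_trust_remote_code(transformers_version: str) -> bool:
    # Per the UPDATE: trust_remote_code=True is only needed before
    # transformers 4.31.0, and harmful from 4.31.0 onward.
    major, minor = (int(part) for part in transformers_version.split(".")[:2])
    return (major, minor) < (4, 31)

def prepend_cls(question: str, cls_token: str = "<cls>") -> str:
    # Per the NOTE: the question must start with the <cls> token, which
    # the T5 tokenizer does not add automatically.
    if question.startswith(cls_token):
        return question
    return cls_token + question
```

For example, `prepend_cls("Where do I live?")` yields `'<cls>Where do I live?'`, matching the string built with `nlp.tokenizer.cls_token` in the pipeline example of the diff.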