alistairewj committed
Commit 451faa9
1 Parent(s): f18ff5e

add model_max_length


Without specifying model_max_length, the tokenizer defaults to a very large int and inference crashes.

I believe this may be due to a recent change in transformers; the bert-large-uncased config was updated 2 months ago: https://huggingface.co/google-bert/bert-large-uncased/commit/6da4b6a26a1877e173fca3225479512db81a5e5b
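
For context, a minimal sketch of the failure mode and the load-time workaround, assuming the tokenizer is loaded via transformers' AutoTokenizer ("this-model-repo" is a placeholder for this repository's Hub id, not its actual name):

from transformers import AutoTokenizer

# Before this commit: with no model_max_length in tokenizer_config.json, the
# tokenizer falls back to a very large int, so truncation=True truncates nothing
# and inputs longer than the model's 512-token context crash inference.
tok = AutoTokenizer.from_pretrained("this-model-repo")
print(tok.model_max_length)  # very large int before this fix

# Workaround prior to this fix: override the limit at load time.
tok = AutoTokenizer.from_pretrained("this-model-repo", model_max_length=512)
enc = tok("a long clinical note ...", truncation=True)
print(len(enc["input_ids"]))  # now capped at 512

With this commit, the 512-token limit comes from the config itself, so no load-time override is needed.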

Files changed (1)
  1. tokenizer_config.json +1 -3
tokenizer_config.json CHANGED
@@ -1,3 +1 @@
- {
- "do_lower_case": true
- }
+ {"do_lower_case": true, "model_max_length": 512}