add model_max_length

#4

Without specifying model_max_length, the tokenizer defaults to a very large int, and inference crashes.

I believe this may be due to a recent upstream change: the bert-base-uncased config was updated 2 months ago (https://huggingface.co/google-bert/bert-base-uncased/commit/86b5e0934494bd15c9632b12f734a8a67f723594).
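
A minimal sketch of the symptom and the consumer-side workaround, assuming a standard `transformers` `AutoTokenizer` flow; the exact repo this PR targets is not named here, so `bert-base-uncased` and `512` are used for illustration:

```python
from transformers import AutoTokenizer

# When tokenizer_config.json omits model_max_length, transformers falls back
# to a sentinel value (VERY_LARGE_INTEGER, int(1e30)), so truncation is
# effectively disabled and long inputs can crash inference.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.model_max_length)

# Workaround until the config is fixed: pass model_max_length explicitly
# (512 is BERT's usual limit, assumed here for illustration).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", model_max_length=512)
encoded = tokenizer("some very long input ...", truncation=True)
```

Adding `"model_max_length"` to the repo's tokenizer_config.json (as this PR presumably does) removes the need for that per-caller override.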
