Updates the tokenizer configuration file

#3
opened by lysandre (HF staff, BERT community org)

The tokenizer configuration file is missing or incorrect, leading to unforeseen errors after the migration of the canonical models.

Refer to the following issue for more information: transformers#29050

The currently failing code is the following:

>>> from transformers import AutoTokenizer
>>> previous_tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
>>> current_tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-large-uncased-whole-word-masking")
>>> print(previous_tokenizer.model_max_length, current_tokenizer.model_max_length)
1000000000000000019884624838656 512
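
For context (not part of this PR's diff): the oversized value is the library's fallback sentinel. When tokenizer_config.json does not define model_max_length, the tokenizer substitutes VERY_LARGE_INTEGER, defined in transformers.tokenization_utils_base as int(1e30), instead of the model's actual 512-token limit:

>>> # Where the huge number comes from: the default used when no
>>> # model_max_length is present in the tokenizer configuration.
>>> from transformers.tokenization_utils_base import VERY_LARGE_INTEGER
>>> VERY_LARGE_INTEGER
1000000000000000019884624838656
>>> VERY_LARGE_INTEGER == int(1e30)
True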

With the fix applied, both repositories report the expected 512-token limit:

>>> from transformers import AutoTokenizer
>>> previous_tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
>>> current_tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-large-uncased-whole-word-masking")
>>> print(previous_tokenizer.model_max_length, current_tokenizer.model_max_length)
512 512
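
As a stopgap, anyone pinned to a revision that predates this fix can override the value at load time; this is a general from_pretrained feature rather than part of this PR:

>>> from transformers import AutoTokenizer
>>> # Explicitly pass model_max_length so the sentinel default is never used.
>>> pinned_tokenizer = AutoTokenizer.from_pretrained(
...     "google-bert/bert-large-uncased-whole-word-masking",
...     model_max_length=512,  # BERT's actual maximum sequence length
... )
>>> pinned_tokenizer.model_max_length
512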
lysandre changed pull request status to open
lysandre changed pull request status to merged
