Adds the tokenizer configuration file

#12
Opened by lysandre (Facebook AI community org)

The tokenizer configuration file is missing or incorrect, leading to unforeseen errors after the migration of the canonical models.

Refer to the following issue for more information: transformers#29050

The current failing code is the following:

>>> from transformers import AutoTokenizer
>>> previous_tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> current_tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
>>> print(previous_tokenizer.model_max_length, current_tokenizer.model_max_length)
1000000000000000019884624838656 512
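As a side note on where the huge number comes from: when `model_max_length` is absent from the tokenizer configuration, transformers falls back to a very large sentinel integer, `int(1e30)`. The odd trailing digits are a float-rounding artifact, since 1e30 is not exactly representable as a double. A minimal sketch:

```python
# transformers uses int(1e30) as the "no limit" sentinel for
# model_max_length when the tokenizer config does not set one.
sentinel = int(1e30)

# 1e30 is a float, so converting it to int exposes the rounding
# of the nearest representable double.
print(sentinel)  # 1000000000000000019884624838656
```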

This is the result after the fix:

>>> from transformers import AutoTokenizer
>>> previous_tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> current_tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
>>> print(previous_tokenizer.model_max_length, current_tokenizer.model_max_length)
512 512
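For context, the fix amounts to declaring the model's maximum sequence length in `tokenizer_config.json`. A minimal sketch of the relevant field (the actual file in this PR may contain additional keys, such as the tokenizer class and special-token definitions):

```json
{
  "model_max_length": 512
}
```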
lysandre changed pull request status to open
lysandre changed pull request status to merged
