Runtime error

vocab.json:   0%|          | 0.00/3.71M [00:00<?, ?B/s]
vocab.json: 100%|██████████| 3.71M/3.71M [00:00<00:00, 132MB/s]
sentencepiece.bpe.model:   0%|          | 0.00/2.42M [00:00<?, ?B/s]
sentencepiece.bpe.model: 100%|██████████| 2.42M/2.42M [00:00<00:00, 121MB/s]
special_tokens_map.json:   0%|          | 0.00/1.56k [00:00<?, ?B/s]
special_tokens_map.json: 100%|██████████| 1.56k/1.56k [00:00<00:00, 12.5MB/s]

The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'M2M100Tokenizer'.
The class this function is called from is 'SMALL100Tokenizer'.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100")
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2024, in from_pretrained
    return cls._from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2256, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user/app/tokenization_small100.py", line 148, in __init__
    super().__init__(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 367, in __init__
    self._add_tokens(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 467, in _add_tokens
    current_vocab = self.get_vocab().copy()
  File "/home/user/app/tokenization_small100.py", line 270, in get_vocab
    vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
  File "/home/user/app/tokenization_small100.py", line 183, in vocab_size
    return len(self.encode) + len(self.lang_token_to_id) + self.num_madeup_words
TypeError: object of type 'method' has no len()
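The traceback bottoms out in the `vocab_size` property of `tokenization_small100.py`: `len(self.encode)` takes the length of the tokenizer's inherited `encode()` *method*, not of a dictionary, hence `TypeError: object of type 'method' has no len()`. Below is a minimal, self-contained sketch of that failure mode and the probable one-character fix, assuming the token-to-id mapping is stored in a `self.encoder` dict as in the upstream `M2M100Tokenizer` (the class name and the example vocabulary here are illustrative, not from the original file):

```python
class SketchTokenizer:
    """Illustrative stand-in for SMALL100Tokenizer's vocab_size bug."""

    def __init__(self):
        self.encoder = {"<s>": 0, "</s>": 1, "hello": 2}  # token -> id map (assumed name)
        self.lang_token_to_id = {"__en__": 3, "__fr__": 4}
        self.num_madeup_words = 8

    def encode(self, text):
        # A bound method: calling len() on it raises the TypeError seen in the log.
        return [self.encoder.get(tok, 0) for tok in text.split()]

    @property
    def vocab_size(self):
        # Buggy line from the traceback:
        #   return len(self.encode) + len(self.lang_token_to_id) + self.num_madeup_words
        # Fix: measure the vocabulary dict, not the encode() method.
        return len(self.encoder) + len(self.lang_token_to_id) + self.num_madeup_words


t = SketchTokenizer()
print(t.vocab_size)  # 3 vocab entries + 2 language tokens + 8 madeup words = 13
```

Note that the crash is triggered during `super().__init__()` (via `_add_tokens` → `get_vocab`), so even with the attribute name corrected, `self.encoder` must already be assigned before the `super().__init__()` call in `tokenization_small100.py`, or the property will fail with an `AttributeError` instead; alternatively, pinning an older `transformers` release that does not call `get_vocab()` during `__init__` may sidestep the issue.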
