runtime error

0<00:00, 27.8MB/s]
Downloading (…)tencepiece.bpe.model:   0%|          | 0.00/2.42M [00:00<?, ?B/s]
Downloading (…)tencepiece.bpe.model: 100%|██████████| 2.42M/2.42M [00:00<00:00, 155MB/s]
Downloading (…)cial_tokens_map.json:   0%|          | 0.00/1.56k [00:00<?, ?B/s]
Downloading (…)cial_tokens_map.json: 100%|██████████| 1.56k/1.56k [00:00<00:00, 10.8MB/s]
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'M2M100Tokenizer'. The class this function is called from is 'SMALL100Tokenizer'.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100")
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2045, in from_pretrained
    return cls._from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2256, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user/app/tokenization_small100.py", line 148, in __init__
    super().__init__(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 366, in __init__
    self._add_tokens(self.all_special_tokens_extended, special_tokens=True)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 462, in _add_tokens
    current_vocab = self.get_vocab().copy()
  File "/home/user/app/tokenization_small100.py", line 270, in get_vocab
    vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
  File "/home/user/app/tokenization_small100.py", line 183, in vocab_size
    return len(self.encoder) + len(self.lang_token_to_id) + self.num_madeup_words
AttributeError: 'SMALL100Tokenizer' object has no attribute 'encoder'. Did you mean: 'encode'?
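What the traceback suggests: the custom tokenization_small100.py calls super().__init__() near the top of SMALL100Tokenizer.__init__ (line 148), but newer transformers releases run self._add_tokens() -> self.get_vocab() -> self.vocab_size inside PreTrainedTokenizer.__init__ (the added-token handling moved into the base constructor around v4.34), so vocab_size is evaluated before the subclass has assigned self.encoder. The following toy sketch reproduces that ordering problem and shows the reordering that avoids it; it is an illustration only, not the actual transformers or small100 code.

# Toy reproduction of the ordering bug: the base constructor reads vocab_size
# before the subclass has set self.encoder. Names mirror the traceback;
# this is not the real library code.

class ToyBaseTokenizer:
    def __init__(self):
        # Analogous to PreTrainedTokenizer.__init__ calling
        # self._add_tokens(...) -> self.get_vocab() -> self.vocab_size.
        _ = self.vocab_size


class BrokenTokenizer(ToyBaseTokenizer):
    def __init__(self, vocab):
        super().__init__()      # vocab_size is read here ...
        self.encoder = vocab    # ... but encoder only exists after this line

    @property
    def vocab_size(self):
        return len(self.encoder)


class FixedTokenizer(ToyBaseTokenizer):
    def __init__(self, vocab):
        self.encoder = vocab    # set the vocabulary attributes first
        super().__init__()      # now the base class can query vocab_size safely

    @property
    def vocab_size(self):
        return len(self.encoder)


if __name__ == "__main__":
    try:
        BrokenTokenizer({"<s>": 0})
    except AttributeError as err:
        print("reproduced:", err)   # ... object has no attribute 'encoder'
    print("fixed vocab_size:", FixedTokenizer({"<s>": 0}).vocab_size)

In practice this usually means either pinning an older transformers release in the Space's requirements.txt (one published before the tokenizer __init__ refactor) or editing tokenization_small100.py so the vocabulary attributes used by vocab_size (self.encoder, self.lang_token_to_id, self.num_madeup_words) are assigned before the super().__init__() call at line 148.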
