runtime error

Downloading ice_text.model: 100%|██████████| 2.71M/2.71M [00:00<00:00, 17.8MB/s]

Traceback (most recent call last):
  File "app.py", line 14, in <module>
    tokenizer = AutoTokenizer.from_pretrained('THUDM/chatglm-6b-int4', trust_remote_code=True)
  File "/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 738, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2045, in from_pretrained
    return cls._from_pretrained(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2256, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/6c5205c47d0d2f7ea2e44715d279e537cae0911f/tokenization_chatglm.py", line 196, in __init__
    super().__init__(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 366, in __init__
    self._add_tokens(self.all_special_tokens_extended, special_tokens=True)
  File "/home/user/.local/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 462, in _add_tokens
    current_vocab = self.get_vocab().copy()
  File "/home/user/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/6c5205c47d0d2f7ea2e44715d279e537cae0911f/tokenization_chatglm.py", line 248, in get_vocab
    vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
  File "/home/user/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/6c5205c47d0d2f7ea2e44715d279e537cae0911f/tokenization_chatglm.py", line 244, in vocab_size
    return self.sp_tokenizer.num_tokens
AttributeError: 'ChatGLMTokenizer' object has no attribute 'sp_tokenizer'
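The traceback itself shows the cause: in this transformers version, the base PreTrainedTokenizer.__init__ (tokenization_utils.py, line 366) calls self._add_tokens(...), which calls get_vocab() and therefore self.sp_tokenizer, before the custom ChatGLMTokenizer.__init__ has finished and assigned sp_tokenizer. The model's remote tokenizer code was written against an older transformers whose __init__ did not touch the vocab. A commonly suggested workaround is to pin transformers to an earlier release in requirements.txt; the exact version below is an assumption, not something this log confirms:

# requirements.txt -- pin an older transformers release so the base
# tokenizer __init__ does not call get_vocab() during construction.
# The specific version is an assumption; adjust if it still fails.
transformers==4.33.2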
