runtime error
Downloading (…)cial_tokens_map.json: 100%|██████████| 153/153 [00:00<00:00, 827kB/s]
Downloading (…)okenizer_config.json: 100%|██████████| 282/282 [00:00<00:00, 1.60MB/s]

You are using the legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565

Traceback (most recent call last):
  File "/home/user/app/app.py", line 24, in <module>
    tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1841, in from_pretrained
    return cls._from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2004, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py", line 190, in __init__
    self.sp_model.Load(vocab_file)
  File "/home/user/.local/lib/python3.10/site-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/home/user/.local/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
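The "TypeError: not a string" at the bottom is sentencepiece's bare error for being handed something other than a file path, most often None. Reading the log, only special_tokens_map.json and tokenizer_config.json were downloaded, so a plausible diagnosis is that T5Tokenizer could not resolve the model's sentencepiece vocab file and passed vocab_file=None down to sp_model.Load(). A minimal sketch of a guard that surfaces a clearer message (the check_vocab_file helper is hypothetical, not part of transformers or sentencepiece):

```python
import os

def check_vocab_file(vocab_file):
    """Validate a sentencepiece model path before it reaches
    SentencePieceProcessor.Load(), which only raises a bare
    'TypeError: not a string' when given None."""
    if not isinstance(vocab_file, str):
        # This is the situation the traceback above most likely hit:
        # from_pretrained failed to resolve the vocab file and passed None on.
        raise TypeError(
            f"vocab_file is {vocab_file!r}, not a path string; the tokenizer's "
            "sentencepiece model was probably not resolved or downloaded"
        )
    if not os.path.isfile(vocab_file):
        raise FileNotFoundError(f"sentencepiece model not found: {vocab_file}")
    return vocab_file
```

In practice the usual suspects for this failure are a missing `sentencepiece` (or `protobuf`) dependency in the Space's requirements, or a failed download of the repo's sentencepiece model file; re-checking those is a reasonable first step before debugging further.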