runtime error

added_tokens.json: 100%|██████████| 34.6k/34.6k [00:00<00:00, 87.8MB/s]
special_tokens_map.json: 100%|██████████| 2.19k/2.19k [00:00<00:00, 15.8MB/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
preprocessor_config.json: 100%|██████████| 339/339 [00:00<00:00, 2.09MB/s]
config.json: 100%|██████████| 1.39k/1.39k [00:00<00:00, 9.94MB/s]
pytorch_model.bin: 100%|█████████▉| 312M/312M [00:01<00:00, 252MB/s]
generation_config.json: 100%|██████████| 293/293 [00:00<00:00, 1.78MB/s]
tokenizer_config.json: 100%|██████████| 44.0/44.0 [00:00<00:00, 376kB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 32, in <module>
    translation_pipeline = pipeline(task="translation", model="Helsinki-NLP/opus-mt-zh-en")
  File "/home/user/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 967, in pipeline
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 791, in from_pretrained
    raise ValueError(
ValueError: This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed in order to use this tokenizer.
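The ValueError at the bottom of the traceback is the root cause: the tokenizer for Helsinki-NLP/opus-mt-zh-en is SentencePiece-based, so transformers cannot instantiate it unless the sentencepiece package is installed in the container. A minimal sketch of a fix, assuming the Space installs its Python dependencies from a requirements.txt (the file layout is inferred from the /home/user/app/app.py path in the traceback, and no versions are pinned here):

# requirements.txt
transformers
torch
sentencepiece

# app.py (relevant excerpt)
from transformers import pipeline

# The tokenizer for this checkpoint is loaded via sentencepiece, so this call
# only succeeds once the package listed above is available at runtime.
translation_pipeline = pipeline(task="translation", model="Helsinki-NLP/opus-mt-zh-en")
print(translation_pipeline("你好，世界")[0]["translation_text"])

After adding sentencepiece and restarting the Space, the pipeline call on line 32 of app.py should build the tokenizer without raising.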
