runtime error

Downloading adapter_model.bin: 100%|██████████| 134M/134M [00:03<00:00, 41.8MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 25, in <module>
    tok = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1222, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1210, in requires_backends
    raise ImportError("".join(failed))
ImportError: LlamaTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones that match your environment. Please note that you may need to restart your runtime after installation.
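The ImportError above means the `sentencepiece` package is not installed in the container: `LlamaTokenizer` (the slow tokenizer class) depends on it. The usual fix is to add `sentencepiece` to the Space's `requirements.txt` (or run `pip install sentencepiece`) and restart the runtime. A minimal sketch of checking for the missing backend up front, before constructing the tokenizer, might look like this (the helper name `missing_backends` is hypothetical, not part of transformers):

```python
import importlib.util

def missing_backends(required=("transformers", "sentencepiece")):
    # Report which required packages are absent, mirroring the
    # requires_backends check that raised the ImportError above.
    return [name for name in required if importlib.util.find_spec(name) is None]

# If this returns ["sentencepiece"], install it before calling
# LlamaTokenizer.from_pretrained(...).
print(missing_backends())
```

Depending on the model repo's files, loading the fast tokenizer instead (e.g. `AutoTokenizer.from_pretrained(..., use_fast=True)`, backed by the `tokenizers` library) may also avoid the SentencePiece dependency, though installing `sentencepiece` is the more direct fix here.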
