runtime error

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

tokenizer_config.json: 100%|██████████| 905/905 [00:00<00:00, 4.42MB/s]
vocab.json: 100%|██████████| 1.77M/1.77M [00:00<00:00, 29.2MB/s]
merges.txt: 100%|██████████| 1.23M/1.23M [00:00<00:00, 51.8MB/s]
special_tokens_map.json: 100%|██████████| 582/582 [00:00<00:00, 3.44MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 6, in <module>
    model = AutoModelForCausalLM.from_pretrained("ai-forever/ruGPT-3.5-13B")
  File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1259, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1247, in requires_backends
    raise ImportError("".join(failed))
ImportError: AutoModelForCausalLM requires the PyTorch library but it was not found in your environment. Checkout the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. Please note that you may need to restart your runtime after installation.
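The root cause is the warning on the first line: the environment has none of PyTorch, TensorFlow, or Flax installed, so transformers can download the tokenizer files but cannot load the model itself. Assuming this app is a Hugging Face Space (the `/home/user/app/app.py` path suggests it), a likely fix is to declare PyTorch as a dependency in the Space's requirements.txt so it is installed at build time, for example:

```
torch
transformers
```

After adding this and restarting/rebuilding the runtime, the import should succeed. Separately from this error, note that ai-forever/ruGPT-3.5-13B is a 13-billion-parameter model, so loading it may also require more RAM/GPU memory than basic Space hardware provides.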
