runtime error

==========
== CUDA ==
==========

CUDA Version 12.1.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama_cpp.py", line 70, in _load_shared_library
    return ctypes.CDLL(str(_lib_path), **cdll_args)  # type: ignore
  File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcuda.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 4, in <module>
    from llama_cpp import Llama
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama_cpp.py", line 83, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama_cpp.py", line 72, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library '/usr/local/lib/python3.10/dist-packages/llama_cpp/libllama.so': libcuda.so.1: cannot open shared object file: No such file or directory
