# NVIDIA's package index, needed to resolve the nvidia-* packages below
--extra-index-url https://pypi.ngc.nvidia.com
# CUDA runtime and cuBLAS libraries
nvidia-cuda-runtime
nvidia-cublas
# Pinned prebuilt llama-cpp-python 0.1.77 wheel (CPython 3.10, manylinux x86_64)
llama-cpp-python @ https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.77/llama_cpp_python-0.1.77-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
# YAML parsing
pyyaml
# PyTorch
torch
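
# Below is a minimal usage sketch (not part of the requirements file itself),
# showing how the pinned llama-cpp-python 0.1.77 wheel is typically used to
# load a GGML model. It assumes the wheel was built with cuBLAS support; the
# model path and generation parameters are hypothetical placeholders.
#
#     from llama_cpp import Llama
#
#     # Hypothetical GGML model path; replace with the actual file.
#     llm = Llama(
#         model_path="models/nous-hermes-llama-2-7b.ggmlv3.q4_0.bin",
#         n_ctx=2048,       # context window size
#         n_gpu_layers=32,  # offload layers to the GPU via cuBLAS
#     )
#
#     output = llm("### Instruction:\nSay hello.\n\n### Response:\n", max_tokens=64)
#     print(output["choices"][0]["text"])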