Model not found

#2 by egodos - opened

Hello all,

I installed this successfully on Windows:
CMAKE_ARGS="-DLLAMA_CUBLAS=on"
!pip install llama-cpp-python

But when loading the model I get: "ValueError: Model path does not exist: ./dolphin-2_6-phi-2.Q4_K_M.gguf"

Do I need to install a specific branch or something?

Thanks in advance

We may need more info than what you have supplied. My best guess is that you passed llama-cpp-python a path that doesn't resolve. Double-check that the file is where you expect it to be, and that the path you hand to the loading part of your Python script/tool is correct.

Note: I'm assuming you are scripting in Python directly against the Python-to-C/C++ bindings from the llama-cpp-python library (https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama). If you are not, you will likely need to consult the authors of whatever tool you are using to load and run inference with the model.
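To rule out a path problem quickly, you can check that the file actually exists before handing it to `Llama`. This is just a sketch, not the library's own validation code; the filename comes from the error message above, and `check_model_path` is a hypothetical helper name:

```python
import os

def check_model_path(path):
    """Return the absolute path if a file exists there, else None."""
    resolved = os.path.abspath(os.path.expanduser(path))
    return resolved if os.path.isfile(resolved) else None

# Filename taken from the ValueError in the question; adjust the path
# to wherever you actually downloaded the .gguf file.
model_path = check_model_path("./dolphin-2_6-phi-2.Q4_K_M.gguf")

if model_path is None:
    print("File not found -- download the GGUF first or fix the path.")
else:
    # Only import and construct Llama once the file is known to exist.
    from llama_cpp import Llama
    llm = Llama(model_path=model_path)
```

Note that `./` resolves relative to the current working directory, which in a notebook or script may not be the folder you think it is, so printing the absolute path is often enough to spot the mistake.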
