Error when trying to load model using example code

by HAvietisov - opened

When trying to load the model as follows:

from transformers import AutoModel
model = AutoModel.from_pretrained("TheBloke/CodeLlama-7B-GGML")

I get this error:

OSError: TheBloke/CodeLlama-7B-GGML does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.

I guess the description page lacks proper instructions on how to load the model. I'll try something else in the meantime.

Transformers can't load GGML models.

Please use ctransformers or llama-cpp-python for loading GGML models from Python code. Both are linked in the README.
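For reference, a minimal sketch of loading this repo with ctransformers instead of transformers (assumes `pip install ctransformers`; the exact GGML file chosen and the generation output depend on the repo contents and quantization, so treat this as illustrative, not authoritative):

from ctransformers import AutoModelForCausalLM

def load_codellama_ggml():
    # ctransformers downloads a GGML file from the Hub and runs it on CPU;
    # model_type="llama" tells it which architecture to use.
    return AutoModelForCausalLM.from_pretrained(
        "TheBloke/CodeLlama-7B-GGML",
        model_type="llama",
    )

if __name__ == "__main__":
    llm = load_codellama_ggml()
    print(llm("def fibonacci(n):", max_new_tokens=64))

llama-cpp-python works similarly but loads from a local file path (`Llama(model_path="...")`), so you download the .bin file first.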

Ah snap, I confused GGML with GPTQ. My bad.

HAvietisov changed discussion status to closed
