What would it take to convert this model to something that could run in llama.cpp?

#1
by dpflug - opened

I'm a bit shocked there are so few attempts at building an LLM on ethically sourced data. Kudos, and I wish you weren't seemingly alone in this.

I was trying to work out how to run this, but I only have Intel GPUs in everything. The example code here assumes a GPU, and removing the `device=` argument didn't work. Does it require CUDA?
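For what it's worth, here's a minimal sketch of the device fallback I was hoping for. The helper names are illustrative (not from the example code), and `"xpu"` assumes a PyTorch build with the Intel XPU backend; availability flags are passed in as booleans so the sketch runs without torch installed:

```python
# Sketch: pick a torch device string with graceful CPU fallback.
# In a real script the flags would come from torch.cuda.is_available()
# and torch.xpu.is_available(); here they are plain parameters.

def pick_device(cuda_available: bool, xpu_available: bool) -> str:
    """Prefer CUDA, then Intel XPU, then plain CPU."""
    if cuda_available:
        return "cuda"
    if xpu_available:
        return "xpu"  # Intel GPUs via PyTorch's XPU backend
    return "cpu"
```

The chosen string would then be passed to `model.to(device)` rather than hard-coding a CUDA device.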

I managed to get other models working with llama.cpp, but this one won't convert, and I'm not sure whether that's because it's in an unusual format or because of a bug in the converter.
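For reference, the route I've used for other models is llama.cpp's Hugging Face converter; it assumes the checkpoint is in a standard HF layout with a recognized architecture, which may be exactly where an unusual format trips it up (paths and output names below are illustrative):

```shell
# From a llama.cpp checkout: convert an HF checkpoint to GGUF.
python convert_hf_to_gguf.py /path/to/model --outfile model.gguf

# Optionally quantize the result afterwards:
./llama-quantize model.gguf model-q4_k_m.gguf q4_k_m
```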
