I want to know how GGUF converts this model.

#2
by hzjane - opened

I want to know how GGUF conversion handles this model. Does llama.cpp support converting the Mistral model? I tried openhermes-2.5-mistral-7b.Q4_0.gguf and found that its model_family is llama. Does this mean that I am actually running a Llama model?

GGUFConfig["general.architecture"] is llama
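For reference, here is a minimal sketch of how one could read that metadata field directly from the GGUF file, assuming the `gguf` Python package that ships with llama.cpp (`pip install gguf`); the exact way string values are decoded from a `ReaderField` may vary between package versions:

```python
# Minimal sketch: inspect GGUF metadata with the `gguf` package from llama.cpp.
# Assumes the quantized file from this question is in the working directory.
from gguf import GGUFReader

reader = GGUFReader("openhermes-2.5-mistral-7b.Q4_0.gguf")

# `fields` maps metadata keys to ReaderField objects; for string fields,
# `data` holds the index of the part that contains the raw value bytes.
field = reader.fields["general.architecture"]
value = bytes(field.parts[field.data[0]]).decode("utf-8")
print(value)  # expected to print "llama" for Mistral-based GGUF files
```

Seeing "llama" here does not mean the weights are from a Llama checkpoint; it only reflects that the converter tags Mistral models with the Llama architecture name because they share the same model graph in llama.cpp.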
