llama.cpp failed to load fp16 & q8

#1 by money82 - opened

llama_model_load: error loading model: create_tensor: tensor 'output.weight' not found
llama_load_model_from_file: failed to load model

Qwen org

This model uses tied word embeddings, so the GGUF has no separate output.weight tensor (the token embedding matrix is reused as the output projection); a newer llama.cpp build handles this and should load it.
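
If you load the GGUF through llama-cpp-python, upgrading the package pulls in a newer llama.cpp build that understands tied embeddings. A minimal sketch, not from this thread; the GGUF filename below is just a placeholder for whichever fp16 or q8_0 file you downloaded:

# pip install --upgrade llama-cpp-python
from llama_cpp import Llama

# Path to the fp16 or q8_0 GGUF; adjust to your local download location.
llm = Llama(model_path="qwen-instruct-fp16.gguf", n_ctx=2048)

# Quick smoke test: if loading no longer fails with the 'output.weight' error,
# generation should run normally.
out = llm("Hello, how are you?", max_tokens=32)
print(out["choices"][0]["text"])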
