This is https://huggingface.co/WizardLM/WizardCoder-15B-V1.0 quantized to GGUF with llama.cpp b1698.
k-quants are not supported by the starcoder architecture: they can be created, but inference does not work.
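For reference, a rough sketch of how a quantization like this can be reproduced with the llama.cpp toolchain of that era. The script name, binary name, flags, and file paths below are assumptions based on b1698-era llama.cpp and may differ in other releases; this is not the exact command line used for this upload.

```python
# Minimal sketch: convert the HF checkpoint to GGUF, then quantize it with
# llama.cpp's tools. Paths and output names are illustrative only.
import subprocess

MODEL_DIR = "WizardCoder-15B-V1.0"          # assumed local clone of the HF repo
F16_GGUF = "wizardcoder-15b-v1.0.f16.gguf"  # intermediate full-precision GGUF (example name)
Q8_GGUF = "wizardcoder-15b-v1.0.Q8_0.gguf"  # quantized output (example name)

# 1) Convert the Hugging Face checkpoint to an f16 GGUF file.
subprocess.run(
    ["python", "convert-hf-to-gguf.py", MODEL_DIR,
     "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2) Quantize the GGUF. A non-k quant type (e.g. Q8_0, Q5_1, Q4_0) is used here,
#    since k-quants do not run correctly with the starcoder architecture.
subprocess.run(["./quantize", F16_GGUF, Q8_GGUF, "Q8_0"], check=True)
```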