Is the result right?

#1
by lucasjin - opened

The other model (CodeLlama GPTQ) seems corrupted; the results aren't right.

Are these GPTQ weights correct?

I haven't tried this one but I have used TheBloke_CodeLlama-34B-Python-GPTQ_gptq-4bit-32g-actorder_True for several things and that model works fine.

WizardCoder-Python (this model) is an instruct-tuned model, while CodeLlama-34B-Python is a continuation model: there is no prompt template, so you have to write enough of whatever code you want that the model can see what it should finish. If you were running into trouble with the fact that CodeLlama-Python doesn't let you ask questions or give instructions via a template, WizardCoder might be a better choice for you!
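To make the difference concrete, here is a minimal sketch of the two prompting styles. It assumes the Alpaca-style template listed on the WizardCoder model card; the example instruction and code stub are made up for illustration.

```python
# Instruct-tuned model (WizardCoder-Python): wrap your request in the
# Alpaca-style template (as shown on the model card) and ask directly.
instruction = "Write a Python function that returns the n-th Fibonacci number."
wizardcoder_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

# Continuation model (CodeLlama-34B-Python): no template at all -- you give it
# the start of the code and it simply keeps writing from there.
codellama_prompt = (
    "def fibonacci(n: int) -> int:\n"
    '    """Return the n-th Fibonacci number."""\n'
)

print(wizardcoder_prompt)
print(codellama_prompt)
```

Either string would then be passed to the loaded GPTQ model as the prompt; the key point is that only the instruct-tuned model expects the template.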

Thanks, but I previously also tried an instruct-tuned model.

What's the difference between this one and TheBloke_CodeLlama-34B-Python-GPTQ_gptq-4bit-32g-actorder_True?

I don't know about this WizardCoder model, but if you are looking for verified good performance, check out the screenshots using the Phind model here: https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/discussions/1#64ead499e74f54587cd6336d