
This is an int8_float16 quantized version of WizardLM/WizardCoder-Python-7B-V1.0, converted with CTranslate2 (see the CTranslate2 documentation for general inference instructions).
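
For reference, a minimal inference sketch using CTranslate2's Python `Generator` API. The model path, generation parameters, and the WizardCoder-style instruction prompt are illustrative assumptions, not part of this card:

```python
import ctranslate2
import transformers

# Load the converted int8_float16 model; the path is an assumption --
# point it at the directory produced by ct2-transformers-converter.
generator = ctranslate2.Generator(
    "./models-ct/WizardLM/WizardCoder-Python-7B-V1.0-ct2-int8_float16",
    device="cuda",  # use device="cpu" if no GPU is available
    compute_type="int8_float16",
)

# The tokenizer is not converted, so load it from the original model.
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "WizardLM/WizardCoder-Python-7B-V1.0"
)

# Alpaca-style instruction prompt (assumed from the original model card).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:"
)

# CTranslate2 generators consume token strings rather than token ids.
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch(
    [tokens], max_length=256, sampling_temperature=0.2
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```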

The license, caveats, and intended usage are the same as for the original model.
Output quality may have been negatively affected by the quantization process.

The command used to quantize the model was:

```sh
ct2-transformers-converter \
  --model ./models-hf/WizardLM/WizardCoder-Python-7B-V1.0 \
  --quantization int8_float16 \
  --output_dir ./models-ct/WizardLM/WizardCoder-Python-7B-V1.0-ct2-int8_float16
```

The quantization was run on a 'high-mem', CPU-only (8-core, 51 GB) Colab instance and took approximately 10 minutes.
