The Quantized CohereForAI/c4ai-command-r7b-12-2024 Model

Original Base Model: CohereForAI/c4ai-command-r7b-12-2024.
Link: https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024

Quantization Configuration

"quantization_config": {
  "bits": 4,
  "checkpoint_format": "gptq",
  "desc_act": true,
  "dynamic": null,
  "group_size": 128,
  "lm_head": false,
  "meta": {
    "damp_auto_increment": 0.0025,
    "damp_percent": 0.01,
    "mse": 0.0,
    "quantizer": [
      "gptqmodel:1.4.5"
    ],
    "static_groups": false,
    "true_sequential": true,
    "uri": "https://github.com/modelcloud/gptqmodel"
  },
  "quant_method": "gptq",
  "sym": true
}
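To make the key settings concrete, here is a minimal sketch that parses a trimmed copy of the configuration above and derives the effective per-weight storage cost. The 16-bit scale per group is an assumption about how GPTQ checkpoints typically store group scales, not something stated on this card.

```python
import json

# Trimmed copy of the quantization_config shown above (meta fields omitted).
raw = """
{
  "bits": 4,
  "checkpoint_format": "gptq",
  "desc_act": true,
  "group_size": 128,
  "lm_head": false,
  "quant_method": "gptq",
  "sym": true
}
"""
cfg = json.loads(raw)

# 4-bit symmetric quantization with one shared scale per group of 128 weights.
# Assuming the scale is stored in 16 bits, the rough per-weight cost is:
effective_bits = cfg["bits"] + 16 / cfg["group_size"]
print(effective_bits)  # 4.125
```

Note that "lm_head": false means the output projection is left unquantized, so the figure above applies only to the quantized linear layers.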

Source Code

Source Code: https://github.com/vkola-lab/medpodgpt/tree/main/quantization

Format: Safetensors
Model size: 3.03B params
Tensor types: I32, BF16, FP16
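The parameter count above gives a rough idea of the checkpoint's disk footprint. The sketch below estimates it, assuming ~4.125 effective bits per weight (4-bit weights plus one 16-bit scale per 128-weight group) and ignoring the unquantized lm_head and embedding tensors, so the true file size will differ somewhat.

```python
# Rough size estimate for the quantized checkpoint.
params = 3.03e9          # 3.03B params, from the card
effective_bits = 4.125   # assumed: 4 bits/weight + 16-bit scale per 128 weights
size_gb = params * effective_bits / 8 / 1e9
print(f"{size_gb:.2f} GB")  # ~1.56 GB
```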