
The Quantized Command R Plus Model

Original Base Model: CohereForAI/c4ai-command-r-plus.
Link: https://huggingface.co/CohereForAI/c4ai-command-r-plus

Special Notice

We quantized this model with group_size=1024 to produce a smaller checkpoint: a larger group size stores fewer quantization scales, typically at a small cost in accuracy. A version quantized with the default group_size=128 is also available here: https://huggingface.co/shuyuej/Command-R-Plus-GPTQ. The sketch below shows where the group size enters the quantization call.
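
For context, the group size is fixed at quantization time. The following is a minimal sketch using the Hugging Face transformers GPTQConfig API; the calibration dataset ("c4") and the output directory are illustrative assumptions, not necessarily what was used for this checkpoint (the actual pipeline is linked under Source Code below).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# group_size=1024 stores one scale/zero-point pair per 1024 weights instead
# of one per 128, so the quantized checkpoint carries fewer quantization
# parameters and is smaller on disk.
quant_config = GPTQConfig(
    bits=4,
    group_size=1024,   # default is 128
    dataset="c4",      # calibration data: an illustrative assumption
    tokenizer=tokenizer,
)

# Quantizes the FP16 weights while loading, then saves the GPTQ checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=quant_config,
    device_map="auto",
)
model.save_pretrained("command-r-plus-gptq-g1024")
```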

Quantization Configuration

"quantization_config": {
    "batch_size": 1,
    "bits": 4,
    "block_name_to_quantize": null,
    "cache_block_outputs": true,
    "damp_percent": 0.1,
    "dataset": null,
    "desc_act": false,
    "exllama_config": {
      "version": 1
    },
    "group_size": 1024,
    "max_input_length": null,
    "model_seqlen": null,
    "module_name_preceding_first_block": null,
    "modules_in_block_to_quantize": null,
    "pad_token_id": null,
    "quant_method": "gptq",
    "sym": true,
    "tokenizer": null,
    "true_sequential": true,
    "use_cuda_fp16": false,
    "use_exllama": true
  },
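
The quantized checkpoint can be loaded directly with transformers, which reads the stored quantization_config (including the ExLlama v1 kernel setting) automatically. A minimal sketch, assuming this model's repo id (shuyuej/Command-R-Plus-Smaller-GPTQ), a CUDA GPU, and a GPTQ backend installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shuyuej/Command-R-Plus-Smaller-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization_config stored with the checkpoint is applied
# automatically; no extra quantization arguments are needed at load time.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```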

Source Code

The quantization code is available at https://github.com/vkola-lab/medpodgpt/tree/main/quantization.
