
# Exllama v2 Quantizations of internlm2-chat-20b-sft-llama

Using [turboderp's ExLlamaV2 v0.0.11](https://github.com/turboderp/exllamav2) for quantization.

Each branch contains an individual bits-per-weight quantization. The `main` branch holds only the measurement.json used for further conversions, so download one of the other branches for the model itself (see the table below).
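
For reference, a branch like `6_5` is typically produced in two passes with ExLlamaV2's `convert.py`: one pass measures quantization error and writes measurement.json, and a second pass reuses that measurement to build the quantized weights. The paths below are placeholders and this is a sketch against the v0.0.11 converter, not the exact commands used for this repo.

```shell
# Sketch only: paths are placeholders, flags are from ExLlamaV2 v0.0.11's convert.py.
# Pass 1: measure quantization error once and save measurement.json for reuse.
python convert.py -i /path/to/internlm2-chat-20b-sft-llama \
    -o /tmp/exl2-work -om measurement.json

# Pass 2: reuse the measurement to build a 6.5 bpw quant with an 8-bit lm_head.
python convert.py -i /path/to/internlm2-chat-20b-sft-llama \
    -o /tmp/exl2-work -m measurement.json \
    -cf /path/to/internlm2-chat-20b-sft-llama-exl2-6_5 -b 6.5 -hb 8
```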

Original model: https://huggingface.co/internlm/internlm2-chat-20b-sft

Model Size: 20B

| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ------ | ---- | ------------ | ------- | ---- | ----------- |
| 6_5 | 6.5 | 8.0 | Default | 21.0 GB | Near-unquantized performance at vastly reduced size, recommended. |
| 4_25 | 4.25 | 6.0 | Default | 15.2 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| 3_5 | 3.5 | 6.0 | Default | 13.8 GB | Lower quality, only use if you have to. |
| 3_0 | 3.0 | 6.0 | Default | 12.5 GB | Very low quality. Usable on 12 GB cards if you reduce context or use the 8-bit cache. |

All VRAM requirements are estimated for 16k context; for 32k context, add ~2 GB.
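
To make the last table row concrete, here is one way to run the `3_0` branch on a 12 GB card with the chat example bundled with ExLlamaV2. The flag names (`-l` to shrink the context, `-c8` for the 8-bit cache, `-mode chatml` for InternLM2's ChatML-style prompt) are assumptions based on the ExLlamaV2 example scripts and can differ between versions, so check `python examples/chat.py -h` first.

```shell
# Sketch: run from an ExLlamaV2 checkout, with the 3_0 branch downloaded
# as described below. Reduced context plus the 8-bit cache is what keeps
# the 20B model inside ~12 GB of VRAM.
python examples/chat.py -m ./internlm2-chat-20b-sft-llama-exl2-3_0 \
    -mode chatml -l 8192 -c8
```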

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-chat-20b-sft-llama-exl2 internlm2-chat-20b-sft-llama-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `internlm2-chat-20b-sft-llama-exl2`:

```shell
mkdir internlm2-chat-20b-sft-llama-exl2
huggingface-cli download bartowski/internlm2-chat-20b-sft-llama-exl2 --local-dir internlm2-chat-20b-sft-llama-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir internlm2-chat-20b-sft-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-chat-20b-sft-llama-exl2 --revision 6_5 --local-dir internlm2-chat-20b-sft-llama-exl2-6_5 --local-dir-use-symlinks False
```