# Exllama v2 Llama-3-8B-Instruct-ortho-v2
Using turboderp's ExLlamaV2 v0.0.21 for quantization.
The `main` branch contains only the `measurement.json` (kept for further conversions); download one of the other branches to get the model weights. Each branch holds a quantization at a specific bits per weight.
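Since each quantization lives on its own branch, you can fetch a single one by passing the branch name as the revision. A minimal sketch; the repo id below is a placeholder assumption, so the command is printed rather than run:

```shell
# Placeholder repo id (assumption) - replace with this model's actual Hugging Face repo.
REPO="user/Llama-3-8B-Instruct-ortho-v2-exl2"
# Pick a bits-per-weight branch from the table below, e.g. 6_5 or 8_0.
BRANCH="6_5"
# Print the huggingface-cli command that downloads only that branch (revision)
# into a local directory; run it once REPO points at the real repo.
echo huggingface-cli download "$REPO" --revision "$BRANCH" --local-dir "model-$BRANCH"
```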
Original model by hjhj3168
Calibration dataset: toxic-qna
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4K) | VRAM (8K) | VRAM (16K) | VRAM (32K) | Description |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 8_0 | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near-unquantized performance. |
| 6_5 | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0; good tradeoff of size vs. performance, recommended. |
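The VRAM figures above include the KV cache at each context length, which is why they grow with context. As a sanity check, the weights alone can be estimated from the bits per weight; a rough sketch, assuming Llama-3-8B has about 8.03 billion parameters (an assumed figure, and it ignores the higher-precision `lm_head`):

```python
def weight_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights alone, in GiB.

    Excludes the KV cache and activation buffers, which account for
    the growth across the 4K-32K context columns in the table.
    """
    return n_params * bits_per_weight / 8 / 1024**3

# Assumed parameter count for Llama-3-8B: ~8.03e9.
for bpw in (8.0, 6.5):
    print(f"{bpw} bpw -> ~{weight_size_gb(8.03e9, bpw):.1f} GiB of weights")
```

The remainder of each table entry (roughly 2-3 GB at 4K context) is cache and runtime overhead.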