Apel-sin committed
Commit fe578b6
1 Parent(s): 410b206

Update README.md

Files changed (1)
README.md +17 -4
README.md CHANGED
@@ -1,7 +1,20 @@
- ---
- library_name: transformers
- license: llama3
- ---
+ # Exllama v2 cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
+
+ Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization.
+
+ <b>The "main" branch contains only the measurement.json; download one of the other branches for the model (see the download example below the table).</b>
+
+ Each branch contains a quantization at a different bits per weight, with the main branch holding only the measurement.json for further conversions.
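+
+ As a rough, hypothetical sketch of such a further conversion, the published measurement.json can be reused so that exllamav2's convert.py skips the measurement pass; the directory names and the 5.0 bpw target below are illustrative, not an actual branch:
+
+ ```python
+ # Hypothetical sketch: quantize a new bits-per-weight variant while reusing
+ # the measurement.json published on the main branch.
+ import subprocess
+
+ subprocess.run([
+     "python", "convert.py",
+     "-i", "Llama-3-8B-Instruct-abliterated-v2",  # unquantized source model dir
+     "-o", "work",                                # scratch/working directory
+     "-cf", "llama-3-8b-abliterated-v2-5.0bpw",   # output dir for the new quant
+     "-b", "5.0",                                 # target bits per weight
+     "-hb", "8",                                  # lm_head bits, as in the table
+     "-m", "measurement.json",                    # skip the measurement pass
+ ], check=True)
+ ```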
+
+ Original model: <a href="https://huggingface.co/cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2">cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2</a><br>
+ Calibration dataset: <a href="https://huggingface.co/datasets/cosmicvalor/toxic-qna">toxic-qna</a>
+
+ ## Available sizes
+
+ | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8k) | VRAM (16k) | VRAM (32k) | Description |
+ | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
+ | [8_0](https://huggingface.co/Apel-sin/llama-3-8B-abliterated-v2-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce; near-unquantized performance. |
+ | [6_5](https://huggingface.co/Apel-sin/llama-3-8B-abliterated-v2-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0; a good trade-off of size vs. performance. **Recommended.** |
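+
+ A minimal sketch of fetching a single branch with the huggingface_hub client (the local directory name is illustrative):
+
+ ```python
+ # Each quant lives on its own git branch; select one via the revision argument.
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="Apel-sin/llama-3-8B-abliterated-v2-exl2",
+     revision="6_5",  # branch name from the table above
+     local_dir="llama-3-8b-abliterated-v2-exl2-6_5",
+ )
+ ```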
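+
+ The VRAM columns above reflect the context length the cache is sized for. As a sketch, assuming exllamav2's Python API from the same release (the model path continues the download example):
+
+ ```python
+ # Load the EXL2 quant with an 8192-token cache (the "VRAM (8k)" column).
+ from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
+ from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
+
+ config = ExLlamaV2Config()
+ config.model_dir = "llama-3-8b-abliterated-v2-exl2-6_5"
+ config.prepare()
+ config.max_seq_len = 8192                  # context length drives cache VRAM use
+
+ model = ExLlamaV2(config)
+ cache = ExLlamaV2Cache(model, lazy=True)   # sized from config.max_seq_len
+ model.load_autosplit(cache)                # split weights across available GPUs
+ tokenizer = ExLlamaV2Tokenizer(config)
+
+ generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
+ settings = ExLlamaV2Sampler.Settings()
+ print(generator.generate_simple("Hello,", settings, 64))
+ ```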
 
 # Model Card for Llama-3-8B-Instruct-abliterated-v2