latimar committed
Commit
e6cb38b
1 Parent(s): 9c2298d

Update README

Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -16,6 +16,16 @@ to [EXL2](https://github.com/turboderp/exllamav2#exl2-quantization) format.
  Converted with the ExllamaV2 [convert.py](https://github.com/turboderp/exllamav2/blob/master/convert.py) script,
  exllamav2 [commit](https://github.com/turboderp/exllamav2/commit/31f31e1b08eeccf4a5ab31fd202ef3100dce8d22)

+ The original model in full weights achieves a **73.8** HumanEval score. Here are the EXL2 quant scores:
+
+ | BPW (hb=8) | HumanEval | Evol-Ins PPL | Wiki PPL | File Size (Gb) |
+ | ----------- | --------- | ------------ | ---------- | -------------- |
+ | 2.55 | **40.24** | 2.0944 | 18.9843 | 10.62 |
+ | 2.8 | **63.41** | 2.0814 | 17.6326 | 11.58 |
+ | 3.0 | **66.46** | 2.0600 | 11.2096 | 12.36 |
+ | 4.625 | **70.12** | 2.0401 | 6.7243 | 18.63 |
+ | 4.8 | **70.73** | 2.0361 | 6.7263 | 19.32 |
+
  ## Datasets used for calibration and PPL measurement

  * [Calibration](https://huggingface.co/datasets/rombodawg/2XUNCENSORED_MegaCodeTraining188k)
@@ -70,12 +80,3 @@ Vanilla Mistral-7B INT8 scores **27.43**

  [EXL2 3.2-bpw quant](https://huggingface.co/firelzrd/Phind-CodeLlama-34B-v2-exl2/tree/3_2-bpw) of this model by [firelzrd](https://huggingface.co/firelzrd)
  scores **60.97**.
-
-
- | BPW (hb=8) | HumanEval | Evol-Ins PPL | Wiki PPL | File Size (Gb) |
- | ----------- | --------- | ------------ | ---------- | -------------- |
- | 2.55 | **40.24** | 2.0944 | 18.9843 | 10.62 |
- | 2.8 | **63.41** | 2.0814 | 17.6326 | 11.58 |
- | 3.0 | **66.46** | 2.0600 | 11.2096 | 12.36 |
- | 4.625 | **70.12** | 2.0401 | 6.7243 | 18.63 |
- | 4.8 | **70.73** | 2.0361 | 6.7263 | 19.32 |
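
For reference, here is a minimal, hypothetical sketch of loading one of these EXL2 quants with the exllamav2 Python API and running a single completion. The class names follow the upstream examples around the pinned commit, but the model path, sampling values, and exact signatures are assumptions and may differ between exllamav2 versions.

```python
# Hypothetical sketch, not part of the original README.
# Quants like the ones in the table are produced offline with exllamav2's convert.py
# (roughly: python convert.py -i <fp16 model dir> -o <work dir> -cf <quant output dir> -b 4.625 -hb 8);
# the flags are illustrative, check the script's --help at the pinned commit.

from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder path to a downloaded quant directory, e.g. a 4.625-bpw revision of this repo.
model_dir = "/models/Phind-CodeLlama-34B-v2-EXL2/4.625bpw"

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache so layers can be auto-split across GPUs
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

# Prompt format as described on the original Phind-CodeLlama-34B-v2 model card.
prompt = (
    "### System Prompt\nYou are an intelligent programming assistant.\n\n"
    "### User Message\nWrite a Python function that checks whether a number is prime.\n\n"
    "### Assistant\n"
)

generator.warmup()
print(generator.generate_simple(prompt, settings, 256))
```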