Tags: Text Generation · Transformers · Safetensors · llama · OpenAccess AI Collective · MPT · axolotl · text-generation-inference · 4-bit precision · gptq
TheBloke committed
Commit 53baf6e · 1 parent: fcc5634

Upload new GPTQs with varied parameters

Files changed (1)
  1. README.md +21 -4
README.md CHANGED
@@ -1,6 +1,25 @@
 ---
+datasets:
+- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
+- QingyiSi/Alpaca-CoT
+- teknium/GPTeacher-General-Instruct
+- metaeval/ScienceQA_text_only
+- hellaswag
+- openai/summarize_from_feedback
+- riddle_sense
+- gsm8k
+- camel-ai/math
+- camel-ai/biology
+- camel-ai/physics
+- camel-ai/chemistry
+- winglian/evals
 inference: false
 license: other
+model_type: llama
+tags:
+- OpenAccess AI Collective
+- MPT
+- axolotl
 ---
 
 <!-- header start -->
@@ -38,7 +57,6 @@ A chat between a curious user and an artificial intelligence assistant. The assi
 
 USER: {prompt}
 ASSISTANT:
-
 ```
 
 ## Provided files
@@ -51,8 +69,8 @@ Each separate quant is in a different branch. See below for instructions on fet
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
 | main | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
 | gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order androup size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
 | gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
 | gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
 
@@ -128,7 +146,6 @@ prompt_template=f'''A chat between a curious user and an artificial intelligence
 
 USER: {prompt}
 ASSISTANT:
-
 '''
 
 print("\n\n*** Generate:")
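The "Provided files" table in the diff above keeps one quant variant per branch. As a minimal sketch of fetching one of those branches (the repo id below is a hypothetical placeholder, since this commit page does not name the repository), `huggingface_hub.snapshot_download` takes the branch name via its `revision` parameter:

```python
# Minimal sketch: download a single quant variant by branch name.
# NOTE: repo_id is a hypothetical placeholder -- substitute the real
# TheBloke/...-GPTQ repository this README belongs to.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/MODEL-GPTQ",            # placeholder, not from this commit
    revision="gptq-4bit-32g-actorder_True",   # branch name from the table above
)
print(f"Downloaded to: {local_dir}")
```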
 
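The last hunk trims a stray blank line from the README's Python example. For context, here is a minimal end-to-end sketch of how that `prompt_template` is typically used with AutoGPTQ; the repo id is again a hypothetical placeholder, and the system prompt completes the wording truncated in the hunk header ("The assi...") with the standard Vicuna phrasing, which is an assumption:

```python
# Minimal sketch, not this repository's verbatim example: load a GPTQ
# quant with AutoGPTQ and generate from the Vicuna-style template.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/MODEL-GPTQ"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    use_safetensors=True,
    device="cuda:0",
)

prompt = "Tell me about AI"
# The system prompt below completes the truncated hunk-header text with
# the standard Vicuna wording -- an assumption, not taken from this commit.
prompt_template = f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: {prompt}
ASSISTANT:
'''

print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, do_sample=True, temperature=0.7, max_new_tokens=256)
print(tokenizer.decode(output[0]))
```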