TheBloke committed
Commit 463eb0c
1 Parent(s): 91fcb4f

Initial GPTQ model commit

Files changed (1): README.md (+7 −5)
README.md CHANGED
@@ -31,11 +31,13 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
 
 ## Prompt template: Alpaca
 
-```Below is an instruction that describes a task. Write a response that appropriately completes the request.
+```
+Below is an instruction that describes a task. Write a response that appropriately completes the request.
 
 ### Instruction: {prompt}
 
 ### Response:
+
 ```
 
 ## Provided files
@@ -47,10 +49,10 @@ Each separate quant is in a different branch. See below for instructions on fet
 
 | Branch | Filename | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With |
 | ------ | -------- | ---- | ---------- | -------------------- | --------- | ------------------- | --------- |
 | main | open-llama-7b-v2-open-instruct-GPTQ-4bit-128g.no-act.order.safetensors | 4 | 128 | False | 4.00 GB | True | GPTQ-for-LLaMa |
-| gptq-4bit-32g-actorder_True | gptq_model-4bit-32g.safetensors | 4 | 32 | True | 4.28 GB | True | GPTQ-for-LLaMa |
-| gptq-4bit-64g-actorder_True | gptq_model-4bit-64g.safetensors | 4 | 64 | True | 4.02 GB | True | GPTQ-for-LLaMa |
-| gptq-4bit-128g-actorder_True | gptq_model-4bit-128g.safetensors | 4 | 128 | True | 3.90 GB | True | GPTQ-for-LLaMa |
-| gptq-8bit--1g-actorder_True | gptq_model-8bit--1g.safetensors | 8 | -1 | True | 7.01 GB | False | GPTQ-for-LLaMa |
+| gptq-4bit-32g-actorder_True | gptq_model-4bit-32g.safetensors | 4 | 32 | True | 4.28 GB | True | AutoGPTQ |
+| gptq-4bit-64g-actorder_True | gptq_model-4bit-64g.safetensors | 4 | 64 | True | 4.02 GB | True | AutoGPTQ |
+| gptq-4bit-128g-actorder_True | gptq_model-4bit-128g.safetensors | 4 | 128 | True | 3.90 GB | True | AutoGPTQ |
+| gptq-8bit--1g-actorder_True | gptq_model-8bit--1g.safetensors | 8 | -1 | True | 7.01 GB | False | AutoGPTQ |
 
 
 ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
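The commit's main prose change is moving the Alpaca prompt template into a proper fenced block. The template is used by substituting the user's instruction for `{prompt}`. A minimal sketch of that substitution in Python, using the template text exactly as it appears in the diff above (the `build_prompt` helper name is illustrative, not part of the README):

```python
# Alpaca-style prompt template, copied from the post-commit README.
ALPACA_TEMPLATE = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:
"""

def build_prompt(instruction: str) -> str:
    """Fill the {prompt} slot of the Alpaca template with the user's instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Summarise the change made in this commit."))
```

The trailing blank line added before the closing fence in the diff means the template ends after `### Response:`, leaving room for the model to begin its completion on the next line.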