---
license: mit
datasets:
- wikitext
---

[pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) quantized to 4-bit using [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ).

To use, first install AutoGPTQ:

```shell
pip install auto-gptq
```

Then load the model from the hub:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "smpanaro/pythia-1b-AutoGPTQ-4bit-128g"

# The tokenizer is unchanged by quantization; load it from the same repo.
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the 4-bit GPTQ weights directly from the hub.
model = AutoGPTQForCausalLM.from_quantized(model_name)
```
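
As a quick sanity check, the quantized model can be used for generation like any causal LM. A minimal sketch (the prompt and `max_new_tokens` value are illustrative):

```python
# Minimal sanity-check sketch; prompt and generation settings are illustrative.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```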

|Model|4-Bit Perplexity|16-Bit Perplexity|Delta|
|--|--|--|--|
|[smpanaro/pythia-160m-AutoGPTQ-4bit-128g](https://huggingface.co/smpanaro/pythia-160m-AutoGPTQ-4bit-128g)|33.4375|23.3024|10.1351|
|[smpanaro/pythia-410m-AutoGPTQ-4bit-128g](https://huggingface.co/smpanaro/pythia-410m-AutoGPTQ-4bit-128g)|21.4688|13.9838|7.4850|
|smpanaro/pythia-1b-AutoGPTQ-4bit-128g|12.0391|11.6178|0.4213|

<sub>Wikitext perplexity measured as in the [Hugging Face docs](https://huggingface.co/docs/transformers/en/perplexity); lower is better.</sub>
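
For reference, the linked guide computes perplexity with a strided sliding window over the tokenized dataset. A condensed sketch of that procedure (the `max_length` and `stride` values here are illustrative, not necessarily those used for the table above):

```python
import torch

# Condensed from the Hugging Face perplexity guide. max_length and stride
# are illustrative and may differ from what was used for the table above.
def perplexity(model, tokenizer, text, max_length=2048, stride=512):
    encodings = tokenizer(text, return_tensors="pt")
    seq_len = encodings.input_ids.size(1)

    nlls, prev_end = [], 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        trg_len = end - prev_end  # only score tokens not already scored
        input_ids = encodings.input_ids[:, begin:end].to(model.device)
        target_ids = input_ids.clone()
        target_ids[:, :-trg_len] = -100  # mask pure-context tokens from the loss

        with torch.no_grad():
            # loss is the mean NLL over unmasked targets; rescale to a sum
            loss = model(input_ids, labels=target_ids).loss
        nlls.append(loss * trg_len)

        prev_end = end
        if end == seq_len:
            break

    return torch.exp(torch.stack(nlls).sum() / prev_end)
```

Here `text` would be the WikiText test split joined into a single string, as in the guide.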