4-bit AWQ Quantized Version of [parlance-labs/hc-mistral-alpaca-merged](https://huggingface.co/parlance-labs/hc-mistral-alpaca-merged).

I used [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) to quantize the model:
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# quantization settings: 4-bit weights, group size 128, GEMM kernels
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
quant_path = "hc-mistral-alpaca-merged-awq"
model_path = "parlance-labs/hc-mistral-alpaca-merged"

# load the full-precision model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# quantize, then save the quantized weights and the tokenizer
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```
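
Before uploading, it is worth sanity-checking that the quantized weights load and generate. Here is a minimal sketch using AutoAWQ's `from_quantized` loader, assuming a CUDA GPU is available; the prompt is just a placeholder.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "hc-mistral-alpaca-merged-awq"

# load the quantized weights from the local directory
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

# run a quick generation to confirm the model works (placeholder prompt)
inputs = tokenizer("Hello, my name is", return_tensors="pt").input_ids.cuda()
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```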

After you save the model, you can upload it to the Hub:

```bash
cd hc-mistral-alpaca-merged-awq
huggingface-cli upload parlance-labs/hc-mistral-alpaca-merged-awq .
```
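
Once uploaded, the model can be loaded straight from the Hub. A minimal sketch, assuming `autoawq` is installed (recent versions of `transformers` can load AWQ checkpoints through it):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "parlance-labs/hc-mistral-alpaca-merged-awq"

# transformers reads the AWQ quantization config from the repo
# and loads the 4-bit weights onto the available device(s)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```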