---
tags:
- gptq
- 4bit
- int4
- gptqmodel
- modelcloud
- mistral
- instruct
---
This model has been quantized using [GPTQModel](https://github.com/ModelCloud/GPTQModel).

- **bits**: 4
- **group_size**: 128
- **desc_act**: true
- **static_groups**: false
- **sym**: true
- **lm_head**: false
- **damp_percent**: 0.0025
- **true_sequential**: true
- **model_name_or_path**: ""
- **model_file_base_name**: "model"
- **quant_method**: "gptq"
- **checkpoint_format**: "gptq"
- **meta**:
  - **quantizer**: "gptqmodel:0.9.9-dev0"

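For reference, the settings above map directly onto GPTQModel's `QuantizeConfig` fields. Below is a minimal, untested sketch of how a comparable quantization run might look with GPTQModel 0.9.x; the base model id, calibration texts, and output path are placeholders, and the exact input format accepted by `quantize()` may vary between versions:

```python
from gptqmodel import GPTQModel, QuantizeConfig

# Quantization settings mirroring the list above.
quantize_config = QuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    static_groups=False,
    sym=True,
    lm_head=False,
    damp_percent=0.0025,
    true_sequential=True,
)

# Placeholder calibration data; a real run would use a few hundred
# representative samples.
calibration_dataset = [
    "GPTQ calibrates per-layer quantization against sample activations.",
    "The calibration set should reflect the model's intended usage.",
]

# Placeholder base model id and output path.
model = GPTQModel.from_pretrained("mistralai/Mistral-Large-Instruct-2407", quantize_config)
model.quantize(calibration_dataset)
model.save_quantized("Mistral-Large-Instruct-2407-gptq-4bit")
```
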
**Here is an example:**
```python
from transformers import AutoTokenizer
from gptqmodel import GPTQModel

model_name = "ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit"

prompt = [{"role": "user", "content": "I am in Shanghai, preparing to visit the natural history museum. Can you tell me the best way to"}]

# Load the tokenizer and the quantized model.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = GPTQModel.from_quantized(model_name)

# Format the chat prompt and generate a completion.
input_tensor = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids=input_tensor.to(model.device), max_new_tokens=100)

# Decode only the newly generated tokens.
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
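Note that `outputs[0][input_tensor.shape[1]:]` slices off the prompt tokens, so only the model's newly generated reply is decoded and printed.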