Suparious committed on
Commit
b3c7d95
1 Parent(s): 82fd536

Add model card

Files changed (1)
  1. README.md +109 -0
README.md CHANGED
@@ -1,3 +1,112 @@
  ---
+ pipeline_tag: text-generation
+ tags:
+ - merlinite
+ - mistral
+ - ibm
+ - lab
+ - labrador
+ - labradorite
+ - quantized
+ - 4-bit
+ - AWQ
+ - pytorch
+ - instruct
+ - text-generation
+ - license:apache-2.0
+ - autotrain_compatible
+ - endpoints_compatible
+ - text-generation-inference
  license: apache-2.0
+ language:
+ - en
+ base_model: mistralai/Mistral-7B-v0.1
  ---
+ # ibm/merlinite-7b AWQ
+
+ - Model creator: [ibm](https://huggingface.co/ibm)
+ - Original model: [merlinite-7b](https://huggingface.co/ibm/merlinite-7b)
+
+ ![image/png](Merlinite.png)
+
+ ## Model Summary
+
+ [Paper](https://arxiv.org/abs/2403.01081)
+
+ - **Language(s):** Primarily English
+ - **License:** Apache 2.0
+ - **Base model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
+ - **Teacher model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
+
+ ## How to use
+
+ ### Install the necessary packages
+
+ ```bash
+ pip install --upgrade autoawq autoawq-kernels
+ ```
+
+ ### Example Python code
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer, TextStreamer
+
+ model_path = "solidrust/merlinite-7b-AWQ"
+ system_message = "You are Newton, incarnated as a powerful AI."
+
+ # Load the quantized model and its tokenizer
+ model = AutoAWQForCausalLM.from_quantized(model_path,
+                                           fuse_layers=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+                                           trust_remote_code=True)
+
+ # Stream generated tokens to stdout as they are produced
+ streamer = TextStreamer(tokenizer,
+                         skip_prompt=True,
+                         skip_special_tokens=True)
+
+ # Build the ChatML prompt and convert it to tokens on the GPU
+ prompt_template = """\
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant"""
+
+ prompt = "You're standing on the surface of the Earth. " \
+          "You walk one mile south, one mile west and one mile north. " \
+          "You end up exactly where you started. Where are you?"
+
+ tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
+                    return_tensors='pt').input_ids.cuda()
+
+ # Generate output, streaming text as it is decoded
+ generation_output = model.generate(tokens,
+                                    streamer=streamer,
+                                    max_new_tokens=512)
+ ```
+
+ ### About AWQ
+
+ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to the most commonly used GPTQ settings, it offers faster Transformers-based inference with equivalent or better output quality.
+
+ AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
+
+ It is supported by:
+
+ - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
+ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types
+ - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+ - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers; a loading sketch follows this list
+ - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
+
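+ If you prefer plain Transformers over AutoAWQ, a minimal loading sketch might look like the code below. It assumes `transformers` 4.35.0 or later with `autoawq` installed and a CUDA GPU available; adjust to your setup.
+
+ ```python
+ # Minimal sketch: load this AWQ checkpoint with plain Transformers
+ # (assumes transformers >= 4.35.0 and autoawq are installed).
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "solidrust/merlinite-7b-AWQ",
+     device_map="auto",  # place the quantized weights on the available GPU(s)
+ )
+ tokenizer = AutoTokenizer.from_pretrained("solidrust/merlinite-7b-AWQ")
+ ```
+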
+ ## Prompt template: ChatML
+
+ ```plaintext
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
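+
+ If the repository's tokenizer defines this ChatML template in its `tokenizer_config.json` (an assumption worth verifying), the prompt can also be built with the tokenizer's chat-template helper instead of formatting the string by hand:
+
+ ```python
+ # Hedged sketch: build the ChatML prompt via apply_chat_template,
+ # assuming the bundled tokenizer ships a matching chat template.
+ messages = [
+     {"role": "system", "content": "You are Newton, incarnated as a powerful AI."},
+     {"role": "user", "content": "Why is the sky blue?"},
+ ]
+ prompt = tokenizer.apply_chat_template(messages,
+                                        tokenize=False,
+                                        add_generation_prompt=True)
+ ```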