mobicham committed
Commit
c3666a9
1 Parent(s): 4859068

Update README.md

Files changed (1): README.md (+107 -0)
README.md CHANGED
---
license: llama2
train: false
inference: false
pipeline_tag: text-generation
---

This is an experimental <a href="https://github.com/mobiusml/hqq/">HQQ</a> 1-bit quantized (<b>binary weights</b>) <a href="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf">Llama2-7B-chat model</a> that uses a LoRA adapter to improve performance (referred to as HQQ+).

Quantizing small models at such extremely low bit widths is a challenging task. The purpose of this model is to show the community what to expect when fine-tuning such models.
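
For context, below is a minimal sketch of how such a 1-bit HQQ quantization of the base model could be produced. The `BaseQuantizeConfig` / `quantize_model` calls follow the HQQ library's usage, but the exact settings are an assumption (`nbits=1, group_size=8`, suggested by the `1bitgs8` model name), and the LoRA adapter training that makes this HQQ+ is not shown. To simply run this pre-quantized model, skip to the Usage section below.
```python
# Sketch only: 1-bit HQQ quantization of the base model.
# nbits=1 / group_size=8 are assumptions inferred from the "1bitgs8" model name.
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer
from hqq.core.quantize import BaseQuantizeConfig

base_model_id = 'meta-llama/Llama-2-7b-chat-hf'
model         = HQQModelForCausalLM.from_pretrained(base_model_id)
tokenizer     = AutoTokenizer.from_pretrained(base_model_id)

quant_config = BaseQuantizeConfig(nbits=1, group_size=8)  # binary weights, small groups
model.quantize_model(quant_config=quant_config)

# The LoRA adapter (the "+" in HQQ+) is then trained on top of the frozen quantized weights (not shown).
```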

## Datasets
The adapter was trained via SFT on random subsets of the following datasets (a short sketch of how such subsets can be drawn follows the list):

### Base Model
* <a href="https://huggingface.co/datasets/wikitext-2-raw-v1">wikitext-2-raw-v1</a> (full)

### Chat Model
* <a href="https://huggingface.co/datasets/timdettmers/openassistant-guanaco">timdettmers/openassistant-guanaco</a> (full)
* <a href="https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k">microsoft/orca-math-word-problems-200k</a> (25K)
* <a href="https://huggingface.co/datasets/meta-math/MetaMathQA">meta-math/MetaMathQA</a> (25K)
* <a href="https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized">HuggingFaceH4/ultrafeedback_binarized</a> (25K - chosen answers only)

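As a rough illustration, the random subsets described above can be drawn with the `datasets` library as shown below; the actual seeds, splits, and preprocessing used for training are not documented in this card and are assumptions here.
```python
# Illustration only: drawing 25K random samples as described above (seed/splits are assumptions)
from datasets import load_dataset

orca_math_25k = (load_dataset("microsoft/orca-math-word-problems-200k", split="train")
                 .shuffle(seed=42)
                 .select(range(25_000)))

metamath_25k  = (load_dataset("meta-math/MetaMathQA", split="train")
                 .shuffle(seed=42)
                 .select(range(25_000)))

# The same pattern applies to HuggingFaceH4/ultrafeedback_binarized (keeping only the
# "chosen" answers) and to the full openassistant-guanaco / wikitext-2-raw-v1 sets.
```
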
## Performance
| Models | Llama2-7B (fp16)| Llama2-7B (HQQ-1bit)| Llama2-7B (HQQ+-1bit)| Quip# (2bit)|
|-------------------|------------------|------------------|------------------|------------------|
| Wiki Perplexity | 5.18 | 9866 | <b>8.53</b> | 8.54 |
| VRAM (GB) | 13.5 | <b>1.76</b> | 1.85 | 2.72 |
| Forward time (sec)| <b>0.1</b> | 0.231 | 0.257 | 0.353 |

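Wiki perplexity above refers to the wikitext-2-raw-v1 test split (lower is better). The exact context length and striding used for the table are not stated here, so the snippet below is only a plausible baseline recipe; the HQQ+ model can be evaluated the same way after loading it as shown in the Usage section.
```python
# Sketch of a standard wikitext-2 perplexity measurement (context length / striding are assumptions)
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id  = 'meta-llama/Llama-2-7b-chat-hf'  # fp16 baseline; swap in the HQQ+ model from the Usage section
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map='cuda').eval()

text      = "\n\n".join(load_dataset('wikitext', 'wikitext-2-raw-v1', split='test')['text'])
input_ids = tokenizer(text, return_tensors='pt').input_ids.cuda()

max_len = 2048  # assumed context length
nlls, n_tokens = [], 0
for start in range(0, input_ids.shape[1], max_len):
    chunk = input_ids[:, start:start + max_len]
    if chunk.shape[1] < 2:
        break
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # mean NLL over the (chunk_len - 1) predicted tokens
    nlls.append(loss * (chunk.shape[1] - 1))
    n_tokens += chunk.shape[1] - 1

print('wikitext-2 perplexity:', torch.exp(torch.stack(nlls).sum() / n_tokens).item())
```
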
| Models | Llama2-7B-chat (fp16)| Llama2-7B-chat (HQQ-1bit)| Llama2-7B-chat (HQQ+-1bit)|
|-------------------|------------------|------------------|------------------|
| ARC (25-shot) | 53.67 | 21.59 | 31.14 |
| HellaSwag (10-shot)| 78.56 | 25.66 | 52.96 |
| MMLU (5-shot) | 48.16 | | 26.54 |
| TruthfulQA-MC2 | 45.32 | 47.81 | 43.16 |
| Winogrande (5-shot)| 72.53 | 49.72 | 60.54 |
| GSM8K (5-shot) | 23.12 | | 11 |
| Average | 53.56 | | 37.56 |

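The few-shot scores follow the Open LLM Leaderboard setup. A hedged sketch of reproducing one of them with EleutherAI's lm-evaluation-harness (v0.4 `lm_eval` API) is shown below for the fp16 baseline; evaluating the HQQ+ variant additionally requires loading the quantized model as in the Usage section, which is not shown here.
```python
# Sketch only: ARC-Challenge (25-shot) for the fp16 baseline with lm-evaluation-harness
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-2-7b-chat-hf,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```
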
## Usage
First, install the latest version of <a href="https://github.com/mobiusml/hqq/">HQQ</a>:
```
pip install git+https://github.com/mobiusml/hqq.git
```
Then you can use the sample code below:
```python
import torch
import transformers
from threading import Thread
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

# Load the quantized model with its LoRA adapter
model_id  = 'mobiuslabsgmbh/Llama-2-7b-chat-hf_1bitgs8_hqq'
model     = HQQModelForCausalLM.from_quantized(model_id, adapter='adapter_v0.1.lora')
tokenizer = AutoTokenizer.from_pretrained(model_id)
device    = 'cuda'  # the quantized model runs on the GPU

# Setup inference mode
tokenizer.add_bos_token = False
tokenizer.add_eos_token = False
if not tokenizer.pad_token: tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model.config.use_cache = True
model.eval()

# Optional: torch compile for faster inference
# model = torch.compile(model)

# Streaming inference
def chat_processor(chat, max_new_tokens=100, do_sample=True):
    tokenizer.use_default_system_prompt = False
    streamer = transformers.TextIteratorStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)

    generate_params = dict(
        tokenizer("<s> [INST] " + chat + " [/INST] ", return_tensors="pt").to(device),
        streamer=streamer,
        max_new_tokens=max_new_tokens,
        do_sample=do_sample,
        pad_token_id=tokenizer.pad_token_id,
        top_p=0.90 if do_sample else None,
        top_k=50 if do_sample else None,
        temperature=0.6 if do_sample else None,
        num_beams=1,
        repetition_penalty=1.2,
    )

    # Run generation in a background thread so tokens can be printed as they are produced
    t = Thread(target=model.generate, kwargs=generate_params)
    t.start()

    print("User: ", chat)
    print("Assistant: ")
    outputs = ""
    for text in streamer:
        outputs += text
        print(text, end="", flush=True)

    torch.cuda.empty_cache()

    return outputs
```

### Example
```python
outputs = chat_processor("What is the solution to x^2 - 1 = 0", max_new_tokens=1000, do_sample=False)
```
```
User: What is the solution to x^2 - 1 = 0
Assistant:
The equation $x^2 - 1 = 0$ can be factored as $(x-1)(x+1) = 0$.
You want to find a value of $x$ that makes this true for all values of $x$. This means that either $x=1$ or $-1$, or $x=-1$. So, there are two solutions: $x=\boxed{1}$ and $x=\boxed{-1}$. The answer is: 1
```