---
license: unknown
library_name: peft
tags:
- mistral
datasets:
- ehartford/dolphin
- garage-bAInd/Open-Platypus
inference: false
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-v0.1
---

# mistral-7b-instruct-peft

This instruction model was built via parameter-efficient QLoRA finetuning of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the first 5k rows of [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Finetuning was executed on 1x A100 (40 GB SXM) for roughly 2 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
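
For context, here is a minimal sketch of how the two training subsets could be loaded with the `datasets` library. It is illustrative only; the actual preprocessing and QLoRA training script are the ones referenced under "Helpful links" below.

```python
from datasets import load_dataset

# Keep the first 5k rows of each dataset, mirroring the subsets described above.
dolphin = load_dataset("ehartford/dolphin", split="train").select(range(5000))
platypus = load_dataset("garage-bAInd/Open-Platypus", split="train").select(range(5000))
```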

## Benchmark metrics

| Metric              | Value  |
|---------------------|--------|
| MMLU (5-shot)       | Coming |
| ARC (25-shot)       | Coming |
| HellaSwag (10-shot) | Coming |
| TruthfulQA (0-shot) | Coming |
| Avg.                | Coming |

We use EleutherAI's [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, the same version as Hugging Face's [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
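
As a rough guide, an evaluation run along the following lines covers one of the rows above. This is a sketch, not the leaderboard's pinned command: the harness entry point and flags vary by version, and the `peft` model argument is an assumption about the Hugging Face backend.

```python
# Sketch of an lm-evaluation-harness run; entry point and flags vary by version.
!lm_eval --model hf --model_args pretrained=mistralai/Mistral-7B-v0.1,peft=dfurman/mistral-7b-instruct-peft --tasks hellaswag --num_fewshot 10 --batch_size 8
```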
29
+
30
+ ## Helpful links
31
+
32
+ * Model license: Apache 2.0
33
+ * Basic usage: [here]()
34
+ * Finetuning code: [here]()
35
+ * Runtime stats: [here]()
36
+

## Loss curve

coming

The above loss curve was generated from the run's private wandb.ai log.

## Example prompts and responses

Example 1:

**User**:
> Write me a numbered list of things to do in New York City.

**mistral-7b-instruct-peft**:

coming

<br>

Example 2:

**User**:
> Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.

**mistral-7b-instruct-peft**:

coming

<br>

Example 3:

**User**:
> What is a good recipe for vegan banana bread?

**mistral-7b-instruct-peft**:

coming

<br>

## Limitations and biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

## Basic usage

```python
# bitsandbytes is required for the 4-bit quantized loading shown below.
!pip install -q -U huggingface_hub peft transformers torch accelerate bitsandbytes
```

```python
from huggingface_hub import notebook_login
import torch
from peft import PeftModel, PeftConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)

# Log in so that from_pretrained can authenticate against the Hugging Face Hub.
notebook_login()
```

```python
peft_model_id = "dfurman/mistral-7b-instruct-peft"
config = PeftConfig.from_pretrained(peft_model_id)

# Quantization config: load the base model in 4-bit NF4 with bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    use_auth_token=True,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token

# Attach the finetuned LoRA adapter to the quantized base model.
model = PeftModel.from_pretrained(model, peft_model_id)

format_template = "You are a helpful assistant. Write a response that appropriately completes the request. {query}\n"
```

```python
# First, format the prompt
query = "Tell me a recipe for vegan banana bread."
prompt = format_template.format(query=query)

# Inference can be done using model.generate
print("\n\n*** Generate:")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast("cuda", dtype=torch.bfloat16):
    output = model.generate(
        input_ids=input_ids,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
        return_dict_in_generate=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        repetition_penalty=1.2,
    )

print(tokenizer.decode(output["sequences"][0], skip_special_tokens=True))
```
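
The `pipeline` helper imported above offers a shorter path to the same result. A minimal sketch, assuming the `model`, `tokenizer`, and `prompt` objects from the cells above; note that passing a `PeftModel` directly requires a recent `transformers` release with PEFT integration:

```python
# Wrap the adapter-equipped model in a text-generation pipeline.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Sampling settings here are illustrative, not tuned values from this card.
result = pipe(prompt, max_new_tokens=512, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```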

## Runtime tests

| runtime / 50 tokens (sec) | GPU                 | attn  | torch dtype | VRAM (GB) |
|:-------------------------:|:-------------------:|:-----:|:-----------:|:---------:|
| 3.1                       | 1x A100 (40 GB SXM) | torch | fp16        | 13        |
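
To reproduce a comparable number, one can time a fixed 50-token generation. A minimal sketch, assuming the `model`, `tokenizer`, and `prompt` from the usage section are already loaded:

```python
import time

# Greedy decoding with a forced length keeps the measurement deterministic.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
torch.cuda.synchronize()
start = time.time()
with torch.no_grad():
    model.generate(
        input_ids=input_ids,
        max_new_tokens=50,
        min_new_tokens=50,  # force exactly 50 new tokens
        do_sample=False,
        pad_token_id=tokenizer.pad_token_id,
    )
torch.cuda.synchronize()
print(f"runtime / 50 tokens: {time.time() - start:.1f} sec")
```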

## Acknowledgements

This model was finetuned by Daniel Furman on Sep 27, 2023, and is for research applications only.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## mistralai/Mistral-7B-v0.1 citation

```
coming
```

## Training procedure

The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` transcription follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
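
Expressed as a `transformers` `BitsAndBytesConfig`, a direct transcription of the list above (for reference; this is not the original training script):

```python
import torch
from transformers import BitsAndBytesConfig

# Direct transcription of the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```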

## Framework versions

- PEFT 0.6.0.dev0