LoneStriker committed
Commit 881ad1b • 1 Parent(s): 3d45341

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -1,35 +1,9 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Newton-7B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Newton-7B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9ecdf14b5e233dce50b3fa2eea042562f9c4c76db7232ee368abe9d17239956
+ size 3822034912
Newton-7B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d726b3a8f8422bc3544454ac512147593bdb0e655f70a056c9898c499cb86b5
+ size 3518996448
Newton-7B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3f864eee84ca185c7347de9706d23a82a015d097961811a87c1a38539851994
+ size 3164577760
Newton-7B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71c0bb8034d9006a41e7e860df92d6fa152e21b86f4131eea0e773c6af9eec7d
+ size 4368450592
Newton-7B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d234e6d9ce140be4907bb6253f7320a3732893004cb83ec0e59298b9e7666aa6
+ size 4140385312
Newton-7B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ce10cc4991aa3cadb19c059c95b51dadc61ee37fb55a12e8869491172ff1ee1
+ size 5131421728
Newton-7B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bac35480ac47fc1f96544585df7b658c2d30c9a40e514bca161ac37f36df0a3
+ size 4997728288
Newton-7B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:247963c15775d5eefc853919475c2262d0a5f46cb5ea5f8f0a52f084f7e10225
+ size 5942078560
Newton-7B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f50397058db6873deeeb12e1501af102c35976b8f8cefd971238810bfe5b8525
+ size 7695875040
README.md ADDED
@@ -0,0 +1,235 @@
+ ---
+ license: other
+ tags:
+ - axolotl
+ - finetune
+ - qlora
+ base_model: openchat/openchat-3.5-0106
+ datasets:
+ - hendrycks/competition_math
+ - allenai/ai2_arc
+ - camel-ai/physics
+ - camel-ai/chemistry
+ - camel-ai/biology
+ - camel-ai/math
+ - STEM-AI-mtl/Electrical-engineering
+ - openbookqa
+ - piqa
+ - metaeval/reclor
+ - mandyyyyii/scibench
+ - derek-thomas/ScienceQA
+ - sciq
+ - TIGER-Lab/ScienceEval
+ ---
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/aimTTdmut59aZxOWQlkcC.jpeg)
+
+ # 🔬👩‍🔬 Newton-7B
+
+ This model is a fine-tuned version of [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) on datasets related to science.
+
+ This model was fine-tuned using [QLoRA](https://arxiv.org/abs/2305.14314) and [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
+
+ This model's training was sponsored by [sablo.ai](https://sablo.ai).
+
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.3.0`
+ ```yaml
+ base_model: openchat/openchat-3.5-0106
+ model_type: MistralForCausalLM
+ tokenizer_type: LlamaTokenizer
+ is_mistral_derived_model: true
+
+ load_in_8bit: false
+ load_in_4bit: true
+ strict: false
+
+ datasets:
+   - path: merged_all.json
+     type:
+       field_instruction: instruction
+       field_output: output
+
+       format: "GPT4 Correct User: {instruction}<|end_of_turn|>GPT4 Correct Assistant:"
+       no_input_format: "GPT4 Correct User: {instruction}<|end_of_turn|>GPT4 Correct Assistant:"
+
+ dataset_prepared_path: last_run_prepared
+ val_set_size: 0.01 # not sure
+ output_dir: ./newton
+
+ adapter: qlora
+ lora_model_dir:
+
+ sequence_len: 8192
+ sample_packing: true
+ pad_to_sequence_len: true
+
+ lora_r: 128
+ lora_alpha: 64
+ lora_dropout: 0.05
+ lora_target_linear: true
+ lora_fan_in_fan_out:
+ lora_target_modules:
+   - gate_proj
+   - down_proj
+   - up_proj
+   - q_proj
+   - v_proj
+   - k_proj
+   - o_proj
+ lora_modules_to_save:
+   - embed_tokens
+   - lm_head
+
+ wandb_project: huggingface
+ wandb_entity:
+ wandb_watch:
+ wandb_name:
+ wandb_log_model:
+
+ hub_model_id: Weyaxi/newton-lora
+ save_safetensors: true
+
+ # change #
+ gradient_accumulation_steps: 12
+ micro_batch_size: 6
+ num_epochs: 2
+ optimizer: adamw_bnb_8bit
+ lr_scheduler: cosine
+ learning_rate: 0.0002
+ # change #
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: true
+ fp16: false
+ tf32: false
+
+ gradient_checkpointing: true
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+
+ warmup_steps: 10 # not sure
+
+ saves_per_epoch: 2
+
+ evals_per_epoch: 4
+ eval_table_size:
+ eval_table_max_new_tokens: 128
+
+ debug:
+ deepspeed:
+ weight_decay: 0.1 # not sure
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   bos_token: "<s>"
+   eos_token: "</s>"
+   unk_token: "<unk>"
+ tokens:
+   - "<|end_of_turn|>"
+   - "<|pad_0|>"
+ ```
+
+ </details><br>
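+
+ To reproduce a run from a config like this, axolotl exposes a CLI entry point. Below is a minimal, hypothetical launch sketch; the config filename `newton.yml` is illustrative, and multi-GPU runs would typically go through `accelerate launch` instead:
+
+ ```python
+ # Hypothetical single-process launch of axolotl's training CLI.
+ # Assumes axolotl is installed and the YAML above is saved as newton.yml.
+ import subprocess
+
+ subprocess.run(
+     ["python", "-m", "axolotl.cli.train", "newton.yml"],
+     check=True,
+ )
+ ```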
141
+
142
+ # πŸ“Š Datasets
143
+
144
+ You can find the dataset I used and the work I am doing with this datasets here:
145
+
146
+ https://huggingface.co/datasets/Weyaxi/sci-datasets
147
+
148
+ Following datasets were used in this model:
149
+
150
+ - πŸ“ [MATH](https://huggingface.co/datasets/hendrycks/competition_math)
151
+
152
+ - 🧠 [ARC](https://huggingface.co/datasets/allenai/ai2_arc) (Note: Only **train** part)
153
+
154
+ - 🧲 [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
155
+
156
+ - βš—οΈ [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
157
+
158
+ - 🦠 [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
159
+
160
+ - πŸ“Š [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
161
+
162
+ - ⚑ [STEM-AI-mtl/Electrical-engineering](https://huggingface.co/datasets/STEM-AI-mtl/Electrical-engineering)
163
+
164
+ - πŸ“š [openbookqa](https://huggingface.co/datasets/openbookqa)
165
+
166
+ - 🧠 [piqa](https://huggingface.co/datasets/piqa)
167
+
168
+ - 🎨 [reclor](https://huggingface.co/datasets/metaeval/reclor)
169
+
170
+ - πŸ”¬ [scibench](https://github.com/mandyyyyii/scibench)
171
+
172
+ - πŸ§ͺ [ScienceQA](https://huggingface.co/datasets/derek-thomas/ScienceQA)
173
+
174
+ - 🧬 [sciq](https://huggingface.co/datasets/sciq)
175
+
176
+ - πŸ“ [ScienceEval](https://huggingface.co/datasets/TIGER-Lab/ScienceEval)
177
+
+ ## 🛠️ Multiple-Choice Question & Answer Dataset Conversion Process
+
+ I used [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) to generate a reasoned, logical answer for each question by providing it with the question and the answer key.
+
+ I used the [Together AI](https://www.together.ai) API for this task.
+
+ The following datasets were converted using this method (a rough sketch of the conversion loop follows this list):
+
+ - 🧠 [ARC](https://huggingface.co/datasets/allenai/ai2_arc) (Note: only the **train** split)
+
+ - 📚 [openbookqa](https://huggingface.co/datasets/openbookqa)
+
+ - 🎨 [reclor](https://huggingface.co/datasets/metaeval/reclor)
+
+ - 🧬 [sciq](https://huggingface.co/datasets/sciq)
+
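+ As an illustration of that pipeline, here is a minimal, hypothetical sketch. It assumes Together AI's OpenAI-compatible endpoint; the prompt wording, helper name, and field names are illustrative, not the exact script used:
+
+ ```python
+ # Hypothetical sketch: turn (question, choices, answer key) into a reasoned
+ # answer with Mixtral via Together AI's OpenAI-compatible API.
+ from openai import OpenAI
+
+ client = OpenAI(
+     base_url="https://api.together.xyz/v1",  # Together's OpenAI-compatible endpoint
+     api_key="YOUR_TOGETHER_API_KEY",         # placeholder
+ )
+
+ def explain_answer(question: str, choices: list[str], answer_key: str) -> str:
+     """Ask Mixtral to justify the keyed answer step by step."""
+     options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
+     prompt = (
+         f"Question: {question}\n{options}\n"
+         f"The correct answer is {answer_key}. "
+         "Explain step by step why this answer is correct."
+     )
+     response = client.chat.completions.create(
+         model="mistralai/Mixtral-8x7B-Instruct-v0.1",
+         messages=[{"role": "user", "content": prompt}],
+     )
+     return response.choices[0].message.content
+ ```
+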
+ # 💬 Prompt Template
+
+ You can use the following prompt template with the model:
+
+ ### GPT4 Correct [(OpenChat)](https://huggingface.co/openchat/openchat-3.5-0106#conversation-templates)
+
+ ```
+ GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant: {assistant}<|end_of_turn|>GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant:
+ ```
+
+ You can also use the chat template from the tokenizer config, like so:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Newton-7B")
+
+ messages = [
+     {"role": "user", "content": "Hello"},
+     {"role": "assistant", "content": "Hi"},
+     {"role": "user", "content": "How are you today?"}
+ ]
+ tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
+ ```
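+
+ Building on that, here is a short, hypothetical end-to-end generation example; the dtype, device placement, and sampling settings are illustrative:
+
+ ```python
+ # Minimal generation sketch for the full-precision base repo; settings illustrative.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Newton-7B")
+ model = AutoModelForCausalLM.from_pretrained(
+     "Weyaxi/Newton-7B", torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ messages = [{"role": "user", "content": "Why is the sky blue?"}]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```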
+
+ # 🤝 Acknowledgments
+
+ Thanks to the [openchat](https://huggingface.co/openchat) team for fine-tuning the excellent model I used as a base.
+
+ Thanks to [@jondurbin](https://huggingface.co/jondurbin) for the dataset-reformatting code in [bagel/data_sources](https://github.com/jondurbin/bagel/tree/main/bagel/data_sources).
+
+ Thanks to [Together AI](https://www.together.ai) for providing everyone with free credits, which I used to convert the multiple-choice datasets into an explanation format.
+
+ Thanks to [Tim Dettmers](https://huggingface.co/timdettmers) for his excellent [QLoRA](https://arxiv.org/abs/2305.14314) work.
+
+ Thanks to all the dataset authors mentioned in the datasets section.
+
+ Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for the framework I used to train this model.
+
+ Overall, thanks to all of the open source AI community! 🚀
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+ If you would like to support me:
+
+ [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
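+
+ The files in this repo are GGUF quantizations of Newton-7B. Below is a minimal, hypothetical loading sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); any llama.cpp front end works similarly, and the quant filename, context size, and sampling settings are illustrative:
+
+ ```python
+ # Hypothetical sketch: run one of the uploaded GGUF quants with llama-cpp-python.
+ from llama_cpp import Llama
+
+ llm = Llama(model_path="Newton-7B-Q4_K_M.gguf", n_ctx=8192)
+
+ # Use the GPT4 Correct template documented above.
+ prompt = "GPT4 Correct User: Why is the sky blue?<|end_of_turn|>GPT4 Correct Assistant:"
+ out = llm(prompt, max_tokens=256, stop=["<|end_of_turn|>"])
+ print(out["choices"][0]["text"])
+ ```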
huggingface-metadata.txt ADDED
@@ -0,0 +1,8 @@
+ url: https://huggingface.co/Weyaxi/Newton-7B
+ branch: main
+ download date: 2024-02-01 00:42:51
+ sha256sum:
+ 008cd7cdcec4904aec424162611fbd7831f55c4a58cde3039707f956b735233e model-00001-of-00003.safetensors
+ ba7aa65e46edd121ff5eb051467ba684a7d5b69efb3eac967fbb7122e34d7dd0 model-00002-of-00003.safetensors
+ 2d2709fc008ce04e9fb9619a2e88901f21af2f060bf57ea4d5c2565fd75e3df3 model-00003-of-00003.safetensors
+ dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055 tokenizer.model
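
A small, hypothetical helper to verify downloaded files against the digests above (it assumes the files sit in the current working directory):

```python
# Hypothetical helper: verify downloaded shards against the sha256 digests above.
import hashlib

EXPECTED = {
    "model-00001-of-00003.safetensors": "008cd7cdcec4904aec424162611fbd7831f55c4a58cde3039707f956b735233e",
    "model-00002-of-00003.safetensors": "ba7aa65e46edd121ff5eb051467ba684a7d5b69efb3eac967fbb7122e34d7dd0",
    "model-00003-of-00003.safetensors": "2d2709fc008ce04e9fb9619a2e88901f21af2f060bf57ea4d5c2565fd75e3df3",
    "tokenizer.model": "dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055",
}

def sha256sum(path: str) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for name, digest in EXPECTED.items():
    print(name, "OK" if sha256sum(name) == digest else "MISMATCH")
```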