RiversHaveWings committed · Commit fccf749 · Parent(s): 591afb3

Initial commit
Browse files
- LICENSE +201 -0
- README.md +29 -0
- adapter_config.json +26 -0
- adapter_model.safetensors +3 -0
- make_evaluator.py +255 -0
- special_tokens_map.json +24 -0
- tokenizer.json +0 -0
- tokenizer.model +3 -0
- tokenizer_config.json +33 -0
LICENSE
ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
README.md
CHANGED
@@ -1,3 +1,32 @@
 ---
+library_name: peft
 license: apache-2.0
 ---
+# minihf_evaluator_openllama_7b
+
+`minihf_evaluator_openllama_7b` is a LoRA instruct fine-tune of [OpenLLaMA 7B](https://huggingface.co/openlm-research/open_llama_7b).
+
+The sequence `<|end|>` was used to separate the prompt and response. The correct way to prompt the model is: `Does 2 + 2 = 4?<|end|>`. The tokenizer will prepend a BOS token (`<s>`) by default. The response will end with an EOS token (`</s>`).
+
+## Training procedure
+
+`minihf_evaluator_openllama_7b` was fine-tuned for 100,000 examples on 90% [Muennighoff/flan](https://huggingface.co/datasets/Muennighoff/flan) / 10% [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) using batch size 4 per GPU on 8 40GB A100 GPUs. Examples where the prompt and response would not fit into 2,048 tokens were dropped. The fine-tuning was done using the following command:
+
+```bash
+accelerate launch make_evaluator.py --output-dir minihf_evaluator_openllama_7b
+```
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
+### Framework versions
+
+- PEFT 0.4.0.dev0
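Given the prompt format described in the README above, a minimal inference sketch (assuming `transformers`, `peft`, and `bitsandbytes` are installed; the adapter path below is a placeholder for wherever this repo is downloaded or hosted):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

adapter_path = "minihf_evaluator_openllama_7b"  # placeholder: local dir or hub repo id
base_model = "openlm-research/open_llama_7b"

# Same NF4 quantization config the adapter was trained with.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(adapter_path)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_path)

# The tokenizer prepends <s> by itself; end the prompt with <|end|> and
# let generation run until the EOS token </s>.
inputs = tokenizer("Does 2 + 2 = 4?<|end|>", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```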
adapter_config.json
ADDED
@@ -0,0 +1,26 @@
{
  "base_model_name_or_path": "openlm-research/open_llama_7b",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layers_pattern": null,
  "layers_to_transform": null,
  "lora_alpha": 8,
  "lora_dropout": 0.0,
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 32,
  "revision": null,
  "target_modules": [
    "self_attn.q_proj",
    "self_attn.k_proj",
    "self_attn.v_proj",
    "self_attn.o_proj",
    "mlp.gate_proj",
    "mlp.up_proj",
    "mlp.down_proj",
    "lm_head"
  ],
  "task_type": null
}
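This file is what `peft` reads back when the adapter is loaded; for instance (a sketch, with a placeholder path):

```python
from peft import PeftConfig

config = PeftConfig.from_pretrained("minihf_evaluator_openllama_7b")  # placeholder path
print(config.base_model_name_or_path)  # "openlm-research/open_llama_7b"
print(config.r, config.lora_alpha)     # 32 8
```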
adapter_model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4a4a1be941defab8d86563b05e463faca7e1b21b8a81659649dedbe24ad780c4
size 324496576
make_evaluator.py
ADDED
@@ -0,0 +1,255 @@
#!/usr/bin/env python3

"""Train a MiniHF evaluator model (instruction tuned LoRA)."""

import argparse
from functools import partial
import os
from pathlib import Path
import sys

os.environ["BITSANDBYTES_NOWELCOME"] = "1"

import accelerate
import datasets
import datasets.distributed
import peft
import torch
from torch import optim
from torch.nn import functional as F
from torch.utils import data
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from tqdm import tqdm

print = tqdm.external_write_mode()(print)


def batch_to_tensors(batch, device="cpu"):
    batch = [item["input_ids"] for item in batch]
    seq_len = max(len(x) for x in batch)
    input_ids = torch.zeros(len(batch), seq_len, dtype=torch.long, device=device)
    attention_mask = torch.zeros(len(batch), seq_len, dtype=torch.long, device=device)
    for i, x in enumerate(batch):
        input_ids[i, : len(x)] = torch.tensor(x, dtype=torch.long, device=device)
        attention_mask[i, : len(x)] = 1
    return input_ids, attention_mask


def weighted_mean(x, w=None, dim=None, keepdim=False, dtype=None):
    w = x.new_tensor(1.0) if w is None else w
    w = w.expand_as(x)
    dim = tuple(range(x.ndim)) if dim is None else dim
    num = torch.sum(x * w, dim=dim, keepdim=keepdim, dtype=dtype)
    denom = torch.sum(w, dim=dim, keepdim=keepdim, dtype=dtype)
    return num / denom


class EndlessHFDataset(data.IterableDataset):
    def __init__(self, dataset):
        super().__init__()
        self.dataset = dataset

    def __iter__(self):
        while True:
            yield from self.dataset
            self.dataset.set_epoch(self.dataset._epoch + 1)


def main():
    parser = argparse.ArgumentParser(
        description=__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )
    parser.add_argument("--batch-size", type=int, default=4, help="batch size per process")
    parser.add_argument("--examples", type=int, default=100000, help="train for n examples")
    parser.add_argument("--output-dir", type=Path, default="evaluator", help="output directory")
    parser.add_argument("--save-every", type=int, default=10000, help="save every n examples")
    args = parser.parse_args()

    dataset_seed = 100
    lora_rank = 32
    lr = 1e-4
    max_len = 2048
    model_name = "openlm-research/open_llama_7b"

    # Initialize Accelerate
    accelerator = accelerate.Accelerator(mixed_precision="bf16", dispatch_batches=False)
    device = accelerator.device
    print0 = accelerator.on_local_main_process(print)

    # Load tokenizer
    print0(f"### Loading tokenizer: {model_name}", file=sys.stderr)
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
    tokenizer.pad_token = tokenizer.eos_token

    # Load model
    print0(f"### Loading model: {model_name}", file=sys.stderr)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
    )
    with accelerator.main_process_first():
        model = AutoModelForCausalLM.from_pretrained(
            model_name,
            device_map="auto" if accelerator.num_processes == 1 else {"": device},
            quantization_config=bnb_config,
            torch_dtype=torch.bfloat16,
            trust_remote_code=True,
        )
    accelerator.wait_for_everyone()

    # Set up the LoRA
    print0("### Setting up the LoRA", file=sys.stderr)
    peft_config = peft.LoraConfig(
        peft.TaskType.CAUSAL_LM,
        inference_mode=False,
        r=lora_rank,
        lora_alpha=8,
        lora_dropout=0.0,
        target_modules=[
            "self_attn.q_proj",
            "self_attn.k_proj",
            "self_attn.v_proj",
            "self_attn.o_proj",
            "mlp.gate_proj",
            "mlp.up_proj",
            "mlp.down_proj",
            "lm_head",
        ],
    )
    model = peft.get_peft_model(model, peft_config)
    accelerator.wait_for_everyone()

    # Set up the model
    model.train()
    model.gradient_checkpointing_enable()
    model.enable_input_require_grads()
    if accelerator.is_local_main_process:
        model.print_trainable_parameters()

    # Dataset helper functions
    def combine_flan(row):
        return row["inputs"] + "<|end|>" + row["targets"] + tokenizer.eos_token

    def combine_dolly(row):
        return (
            row["context"]
            + "\n\n"
            + row["instruction"]
            + "<|end|>"
            + row["response"]
            + tokenizer.eos_token
        )

    def to_tokens(combine_fn, row):
        return tokenizer(combine_fn(row))

    def exclude_too_long(row):
        return len(row["input_ids"]) <= max_len

    # Load dataset
    print0("### Loading datasets", file=sys.stderr)
    with accelerator.main_process_first():
        dataset_1 = datasets.load_dataset("Muennighoff/flan", streaming=True)
        dataset_2 = datasets.load_dataset("databricks/databricks-dolly-15k", streaming=True)
    accelerator.wait_for_everyone()
    dataset_1 = dataset_1["train"].map(partial(to_tokens, combine_flan))
    dataset_2 = dataset_2["train"].map(partial(to_tokens, combine_dolly))
    dataset = (
        datasets.interleave_datasets([dataset_1, dataset_2], probabilities=[0.9, 0.1])
        .filter(exclude_too_long)
        .shuffle(seed=dataset_seed)
        .select_columns(["input_ids"])
    )
    dataset = datasets.distributed.split_dataset_by_node(
        dataset, accelerator.process_index, accelerator.num_processes
    )
    dataloader = data.DataLoader(
        EndlessHFDataset(dataset),
        batch_size=args.batch_size,
        collate_fn=batch_to_tensors,
        drop_last=True,
    )

    # Set up optimizer
    opt = optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.99))

    # Wrap objects
    model, opt, dataloader = accelerator.prepare(model, opt, dataloader)

    # Test max sequence length
    print0("### Testing max sequence length", file=sys.stderr)
    input_ids = torch.zeros([args.batch_size, max_len], dtype=torch.long, device=device)
    attention_mask = torch.ones([args.batch_size, max_len], dtype=torch.long, device=device)
    outputs = model(input_ids, attention_mask=attention_mask, use_cache=False)
    accelerator.backward(outputs.logits.sum() * 0)
    opt.zero_grad()
    torch.cuda.empty_cache()

    def save_model():
        print0("### Saving model", file=sys.stderr)
        accelerator.wait_for_everyone()
        if accelerator.is_main_process:
            unwrapped_model = accelerator.unwrap_model(model)
            unwrapped_model.save_pretrained(args.output_dir, safe_serialization=True)
            tokenizer.save_pretrained(args.output_dir)

    # Train
    print0("### Training", file=sys.stderr)
    examples = 0
    last_save = 0
    pbar = tqdm(
        disable=not accelerator.is_local_main_process,
        total=args.examples,
        unit="ex",
        smoothing=0.01,
    )

    try:
        for batch in dataloader:
            input_ids, attention_mask = batch
            with accelerator.accumulate(model):
                # Forward pass
                outputs = model(
                    input_ids[:, :-1],
                    attention_mask=attention_mask[:, :-1],
                    use_cache=False,
                )
                losses = F.cross_entropy(
                    outputs.logits.transpose(-1, -2),
                    input_ids[:, 1:],
                    reduction="none",
                )
                mask = attention_mask[:, :-1] * attention_mask[:, 1:]
                loss = weighted_mean(losses, mask, dtype=torch.float32)

                # Backward pass and optimizer step
                accelerator.backward(loss)
                opt.step()
                opt.zero_grad()

            global_batch_size = args.batch_size * accelerator.num_processes
            examples += global_batch_size
            pbar.update(global_batch_size)

            global_loss = accelerator.reduce(loss, "mean")
            print0(f"examples: {examples}, loss: {global_loss.item():g}")

            if examples >= args.examples:
                save_model()
                break

            if examples - last_save >= args.save_every:
                save_model()
                last_save += args.save_every

    except KeyboardInterrupt:
        pass

    finally:
        pbar.close()


if __name__ == "__main__":
    main()
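The loss in the training loop above is next-token cross-entropy with the targets shifted by one position; a tiny standalone sketch of the same masking logic, with hypothetical toy shapes and no real model:

```python
import torch
from torch.nn import functional as F

# Toy shapes: batch of 2, sequences padded to length 6, vocab of 8.
input_ids = torch.randint(0, 8, (2, 6))
attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1],
                               [1, 1, 1, 0, 0, 0]])
logits = torch.randn(2, 5, 8)  # stands in for model(input_ids[:, :-1]).logits

# Per-position loss of each token predicting the one after it.
losses = F.cross_entropy(logits.transpose(-1, -2), input_ids[:, 1:], reduction="none")
# A position counts only when both the input token and its target are real tokens.
mask = attention_mask[:, :-1] * attention_mask[:, 1:]
loss = (losses * mask).sum() / mask.sum()
```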
special_tokens_map.json
ADDED
@@ -0,0 +1,24 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "</s>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab1b681ec7fc02fed5edd3026687d7a692a918c4dd8e150ca2e3994a6229843b
size 534194
tokenizer_config.json
ADDED
@@ -0,0 +1,33 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": false,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "model_max_length": 2048,
  "pad_token": null,
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
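As a quick sanity check of the config above (a sketch; the tokenizer path is a placeholder): `add_bos_token: true` and `add_eos_token: false` mean encoding a prompt prepends the BOS id but never appends EOS, matching the README's prompting instructions.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("minihf_evaluator_openllama_7b")  # placeholder path
ids = tok("Does 2 + 2 = 4?<|end|>").input_ids
assert ids[0] == tok.bos_token_id   # <s> is prepended (add_bos_token: true)
assert ids[-1] != tok.eos_token_id  # no </s> is appended (add_eos_token: false)
```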