Update README.md
README.md

---
license: other
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
tags:
- llama-3
- llama
- '3'
- 5B
---
This is just an experiment, similar to the one done for [chargoddard/llama3-42b-v0](https://huggingface.co/chargoddard/llama3-42b-v0). After pruning, the model was fine-tuned ("healed") with QLoRA on the code DPO dataset [AlekseyKorshuk/evol-codealpaca-v1-dpo](https://huggingface.co/datasets/AlekseyKorshuk/evol-codealpaca-v1-dpo). Due to limitations, training only covered 3150 of 4935 steps (~64% of the data). I had to restart training about halfway through, so the logs are split in two.

Loss: ~1.2

<img src="https://i.imgur.com/AnuMlv7.png">

<img src="https://i.imgur.com/kHXnKCU.png">

<img src="https://i.imgur.com/aHKVgqT.png">

<img src="https://i.imgur.com/KTLYnjl.png">

mergekit.yaml
```
slices:
  - sources:
      - model: ./Meta-Llama-3-8B-Instruct/
        layer_range: [0,15]
  - sources:
      - model: ./Meta-Llama-3-8B-Instruct/
        layer_range: [29,32]

merge_method: passthrough
dtype: bfloat16
```
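
The passthrough merge above keeps layers 0-14 and 29-31 of the 32-layer Llama-3-8B-Instruct stack (18 layers, assuming mergekit's `layer_range` is end-exclusive) and drops the middle layers 15-28, which is roughly where the 5B tag comes from. As an illustration only (not the code actually used), the same pruning could be sketched directly in transformers:

```
# Rough sketch: reproduce the layer selection from the mergekit config with plain
# transformers. The local path, output path, and the end-exclusive interpretation
# of layer_range are assumptions.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "./Meta-Llama-3-8B-Instruct/", torch_dtype=torch.bfloat16
)

keep = list(range(0, 15)) + list(range(29, 32))  # 15 + 3 = 18 of 32 layers
model.model.layers = torch.nn.ModuleList(model.model.layers[i] for i in keep)
model.config.num_hidden_layers = len(keep)

# Re-index the remaining layers so KV-cache bookkeeping stays consistent.
for i, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = i

print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")  # ~5B
model.save_pretrained("./llama3-5b-pruned")  # hypothetical output path
```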

ORPOConfig
```
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_length=1024,
max_prompt_length=512,
overwrite_output_dir=False,
beta=0.1,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
gradient_accumulation_steps=4,
optim="paged_adamw_8bit",
num_train_epochs=1,
evaluation_strategy="steps",
eval_steps=0.02,
logging_steps=1,
warmup_steps=50,
report_to="wandb",
output_dir=out_dir_folder,
fp16=True,
save_steps=50
```
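
For context, below is one way the ORPOConfig above could be wired into trl's ORPOTrainer with QLoRA; ORPO folds the preference objective into the fine-tune itself, so no separate reference model is needed, which keeps the memory footprint small. The 4-bit quantization settings, LoRA hyper-parameters, model path, and eval split are assumptions for illustration, not the exact training script.

```
# Sketch only: QLoRA + ORPO healing of the pruned model with trl.
# Paths, LoRA settings, and the eval split are assumed, not taken from the original run.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

out_dir_folder = "./llama3-5b-orpo"  # placeholder for the original output_dir variable

# 4-bit base model (QLoRA); quantization settings are a common default, assumed here.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "./llama3-5b-pruned", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("./llama3-5b-pruned")
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token

# LoRA adapter trained on top of the frozen 4-bit weights; values are assumptions.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Dataset is assumed to already be in prompt/chosen/rejected format.
dataset = load_dataset("AlekseyKorshuk/evol-codealpaca-v1-dpo", split="train")
dataset = dataset.train_test_split(test_size=0.01)

args = ORPOConfig(
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_length=1024,
    max_prompt_length=512,
    overwrite_output_dir=False,
    beta=0.1,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    optim="paged_adamw_8bit",
    num_train_epochs=1,
    evaluation_strategy="steps",
    eval_steps=0.02,
    logging_steps=1,
    warmup_steps=50,
    report_to="wandb",
    output_dir=out_dir_folder,
    fp16=True,
    save_steps=50,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```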