EllieS committed
Commit bf8ff8e
1 Parent(s): 0acdc77

Model save
README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ base_model: alignment-handbook/zephyr-7b-sft-full
+ model-index:
+ - name: zephyr-7b-dpo-lora-pubmedqa-selfgen-old
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-dpo-lora-pubmedqa-selfgen-old
+
+ This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0027
+ - Rewards/chosen: -1.9349
+ - Rewards/rejected: -10.4879
+ - Rewards/accuracies: 1.0
+ - Rewards/margins: 8.5530
+ - Logps/rejected: -1094.3392
+ - Logps/chosen: -260.5620
+ - Logits/rejected: -2.3781
+ - Logits/chosen: -2.6042
+
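Editor's note (not part of the committed card): the card does not define these reward columns. Assuming the standard TRL DPO logging convention, the reward of a completion $y$ for prompt $x$ is

$$
r(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right),
$$

so Rewards/chosen and Rewards/rejected are the mean rewards of the preferred and dispreferred completions, Rewards/margins is the mean of $r(x, y_{\text{chosen}}) - r(x, y_{\text{rejected}})$, and Rewards/accuracies is the fraction of pairs whose chosen reward exceeds the rejected one. The Logps columns are the policy's per-sequence summed token log-probabilities, averaged over the evaluation batch.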
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 8
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
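Editor's note: the list above is the Trainer's own record of the run. As rough orientation only, the sketch below shows how those values map onto a transformers/TRL setup. It is not the author's training script: the dataset path, LoRA settings, and DPO beta are assumptions (the card does not record them), and it assumes a TRL release contemporary with Transformers 4.36, where `DPOTrainer` still accepts `beta` directly. Launching it with accelerate on two GPUs reproduces the total train batch size of 8.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model_id = "alignment-handbook/zephyr-7b-sft-full"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Hypothetical preference data with "prompt", "chosen", and "rejected" columns;
# the dataset actually used for this run is not named in the card.
train_dataset = load_dataset("json", data_files="pubmedqa_preferences.jsonl", split="train")

# Assumed LoRA settings -- the adapter's rank/alpha/targets are not reported here.
peft_config = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

training_args = TrainingArguments(
    output_dir="zephyr-7b-dpo-lora-pubmedqa-selfgen-old",
    learning_rate=5e-6,              # learning_rate: 5e-06
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=2,   # x 2 devices -> total_train_batch_size: 8
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)

trainer = DPOTrainer(
    model,
    ref_model=None,     # with peft_config set, TRL uses the frozen base weights as the reference
    beta=0.1,           # assumed; the DPO beta is not recorded in the card
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```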
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.023         | 0.42  | 7000  | 0.0164          | -1.0835        | -8.5927          | 1.0                | 7.5092          | -904.8198      | -175.4290    | -2.8442         | -2.8776       |
+ | 0.0052        | 0.83  | 14000 | 0.0026          | -1.9746        | -10.4931         | 1.0                | 8.5185          | -1094.8602     | -264.5324    | -2.3838         | -2.6053       |
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.36.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
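Editor's note: a minimal inference sketch, not part of the committed card. It assumes the adapter is published under the repository id `EllieS/zephyr-7b-dpo-lora-pubmedqa-selfgen-old` (the commit author and model name above) and attaches it to the base model with PEFT.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "alignment-handbook/zephyr-7b-sft-full"
adapter_id = "EllieS/zephyr-7b-dpo-lora-pubmedqa-selfgen-old"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
model.eval()

prompt = "Question: Does aspirin reduce the risk of stroke?\nAnswer:"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```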
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:81eee70c6664b1ccf2e1f64419fcf49ade18b8a03cc37b1549869754613705b3
+ oid sha256:50759cf5be7f54cff0ccffa960f2af618677be148d442b3dc438a850779b86b8
  size 83946192
all_results.json ADDED
@@ -0,0 +1,21 @@
+ {
+     "epoch": 1.0,
+     "eval_logits/chosen": -2.604215621948242,
+     "eval_logits/rejected": -2.3780710697174072,
+     "eval_logps/chosen": -260.56195068359375,
+     "eval_logps/rejected": -1094.3392333984375,
+     "eval_loss": 0.0027059800922870636,
+     "eval_rewards/accuracies": 1.0,
+     "eval_rewards/chosen": -1.9348751306533813,
+     "eval_rewards/margins": 8.553033828735352,
+     "eval_rewards/rejected": -10.487908363342285,
+     "eval_runtime": 3.9022,
+     "eval_samples": 5,
+     "eval_samples_per_second": 1.281,
+     "eval_steps_per_second": 0.256,
+     "train_loss": 0.03956493256323719,
+     "train_runtime": 68990.3604,
+     "train_samples": 134157,
+     "train_samples_per_second": 1.945,
+     "train_steps_per_second": 0.243
+ }
eval_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+     "epoch": 1.0,
+     "eval_logits/chosen": -2.604215621948242,
+     "eval_logits/rejected": -2.3780710697174072,
+     "eval_logps/chosen": -260.56195068359375,
+     "eval_logps/rejected": -1094.3392333984375,
+     "eval_loss": 0.0027059800922870636,
+     "eval_rewards/accuracies": 1.0,
+     "eval_rewards/chosen": -1.9348751306533813,
+     "eval_rewards/margins": 8.553033828735352,
+     "eval_rewards/rejected": -10.487908363342285,
+     "eval_runtime": 3.9022,
+     "eval_samples": 5,
+     "eval_samples_per_second": 1.281,
+     "eval_steps_per_second": 0.256
+ }
runs/Feb20_13-41-06_586cb8b6da8c/events.out.tfevents.1708437159.586cb8b6da8c.8117.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2b67637d347b1b0f4a86716994b37116505fd7e2c77c5a653894797b7c8d8c12
- size 1021003
+ oid sha256:56e93f9cc44f93b2ed9ff9ef47a6a3af1c42d116080871855b9dccb6ec92afe3
+ size 1070610
runs/Feb20_13-41-06_586cb8b6da8c/events.out.tfevents.1708506152.586cb8b6da8c.8117.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57917befde6ebe3ed2d8139a120c74577d5410dd793a3b77588caf1ae27cb4a6
+ size 841
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.03956493256323719,
+     "train_runtime": 68990.3604,
+     "train_samples": 134157,
+     "train_samples_per_second": 1.945,
+     "train_steps_per_second": 0.243
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff