lole25 committed on
Commit a755005
1 Parent(s): cdae861

Model save

README.md ADDED
@@ -0,0 +1,59 @@
+ ---
+ library_name: peft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ base_model: DUAL-GPO-2/zephyr-7b-irepo-new-i0
+ model-index:
+ - name: zephyr-7b-gpo-v14-i1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-gpo-v14-i1
+
+ This model is a fine-tuned version of [DUAL-GPO-2/zephyr-7b-irepo-new-i0](https://huggingface.co/DUAL-GPO-2/zephyr-7b-irepo-new-i0) on the None dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: multi-GPU
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.36.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
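To make the hyperparameter list above concrete, here is a minimal sketch (not the authors' actual script) of how these values would map onto Hugging Face `TrainingArguments` for a TRL `DPOTrainer` run against the Transformers 4.36 API pinned above. The `output_dir` and `bf16` choices are assumptions; the numeric values come from the card.

```python
# Sketch: the model-card hyperparameters expressed as TrainingArguments.
# Adam betas=(0.9, 0.999) and eps=1e-8 are the Transformers defaults,
# matching the optimizer line in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-7b-gpo-v14-i1",  # assumed; name taken from the card
    learning_rate=5e-6,
    per_device_train_batch_size=2,      # train_batch_size: 2
    per_device_eval_batch_size=2,       # eval_batch_size: 2
    gradient_accumulation_steps=2,      # 2 x 2 = total_train_batch_size of 4
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                   # lr_scheduler_warmup_ratio: 0.1
    seed=42,
    bf16=True,                          # assumption: common for Zephyr-7B runs
)
# These args would then be passed to trl.DPOTrainer together with the PEFT
# base model DUAL-GPO-2/zephyr-7b-irepo-new-i0 and a preference dataset.
```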
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:200d76a04b8b05a2cc7ac0c8060f2a1728e93088b2eca8367575b51cbac00746
+ oid sha256:e76b2e259274d20b1b63f97c863473aff31029fdd23f1ad8d71af8e5c99d7354
  size 671150064
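This commit swaps only the LoRA adapter weights (the file size is unchanged; only the LFS object hash differs), so inference needs the base model plus this adapter. A minimal loading sketch with the PEFT 0.7.x API follows; the repo id, dtype, and device map are assumptions, not recorded in this commit.

```python
# Sketch: load the updated adapter on top of its base model (PEFT 0.7.x API).
# The repo id below is inferred from the model-index name and is hypothetical;
# dtype/device_map are illustrative choices.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "DUAL-GPO-2/zephyr-7b-gpo-v14-i1",  # hypothetical adapter repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# The tokenizer comes from the base model named in the card.
tokenizer = AutoTokenizer.from_pretrained("DUAL-GPO-2/zephyr-7b-irepo-new-i0")
```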
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.14500870579197292,
+     "train_runtime": 22293.6001,
+     "train_samples": 21000,
+     "train_samples_per_second": 0.942,
+     "train_steps_per_second": 0.235
+ }
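As a quick consistency check, the derived throughput fields follow from the raw counts in this file together with the total_train_batch_size of 4 from the card; the step count below is inferred, not logged here.

```python
# Sanity check on all_results.json: the *_per_second fields are consistent
# with train_samples, train_runtime, and a total train batch size of 4.
train_samples, train_runtime = 21000, 22293.6001
total_train_batch_size = 4
steps = train_samples / total_train_batch_size   # 5250 optimizer steps, 1 epoch

print(round(train_samples / train_runtime, 3))   # 0.942 -> train_samples_per_second
print(round(steps / train_runtime, 3))           # 0.235 -> train_steps_per_second
```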
runs/May12_11-19-46_gpu4-119-5/events.out.tfevents.1715477482.gpu4-119-5.2337277.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:81b1e3cf2c412d6e263b33b3d0293f0d3e9d9b4b28b7e32796cb4f9b4862cac2
- size 334744
+ oid sha256:2fd6f75773c5f4a11a18a2b19ebb47934c2bad466591992d0c3a4ae67a598b98
+ size 338268
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.14500870579197292,
+     "train_runtime": 22293.6001,
+     "train_samples": 21000,
+     "train_samples_per_second": 0.942,
+     "train_steps_per_second": 0.235
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff