DrishtiSharma committed
Commit f3fb722
1 Parent(s): b69d0f7

End of training

README.md ADDED
@@ -0,0 +1,66 @@
+ ---
+ license: other
+ library_name: peft
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ base_model: google/gemma-7b-it
+ model-index:
+ - name: gemma-7b-it-dolly-15k-japanese-brainstorming-ipo
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # gemma-7b-it-dolly-15k-japanese-brainstorming-ipo
+
+ This model is a fine-tuned version of [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.4864
+ - Rouge Scores: {'rouge1': 0.8349379511354852, 'rouge2': 0.7274477012465996, 'rougeL': 0.8017731965466872, 'rougeLsum': 0.8347274626949961}
+ - Bleu Scores: [0.8344549267092076, 0.7702670675251023, 0.7108107057859971, 0.6501643333897235]
+ - Gen Len: 241.8588
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
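+ As a starting point while this section is being completed, here is a minimal inference sketch (not part of the auto-generated card): it loads the PEFT adapter on top of the base model with `peft` and `transformers`. The adapter repo id and the example prompt are assumptions for illustration only.
+
+ ```python
+ # Hedged sketch: load the adapter onto google/gemma-7b-it for inference.
+ # adapter_id is assumed from the model name; adjust to the actual repo path.
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ base_id = "google/gemma-7b-it"
+ adapter_id = "DrishtiSharma/gemma-7b-it-dolly-15k-japanese-brainstorming-ipo"  # assumed
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ base_model = AutoModelForCausalLM.from_pretrained(
+     base_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ model = PeftModel.from_pretrained(base_model, adapter_id)
+
+ # Example brainstorming prompt in Japanese (illustrative placeholder).
+ messages = [{"role": "user", "content": "新しいアプリのアイデアを3つ挙げてください。"}]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ outputs = model.generate(input_ids, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+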
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
+ - learning_rate: 0.0002
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 3
+
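+ The listing above maps onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction from the reported values only; everything else (gradient accumulation, precision, LoRA/PEFT config, the exact trainer class) is not recorded in this card and is left at library defaults.
+
+ ```python
+ # Hedged sketch: the hyperparameters reported above, expressed as TrainingArguments.
+ # Only values listed in the card are set explicitly; all other settings are defaults.
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="gemma-7b-it-dolly-15k-japanese-brainstorming-ipo",  # assumed output dir
+     learning_rate=2e-4,                 # 0.0002
+     per_device_train_batch_size=4,      # train_batch_size: 4
+     per_device_eval_batch_size=4,       # eval_batch_size: 4
+     seed=42,
+     adam_beta1=0.9,                     # Adam betas=(0.9, 0.999)
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,                  # epsilon=1e-08
+     lr_scheduler_type="cosine",
+     num_train_epochs=3,
+ )
+ ```
+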
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:--------:|
+ | 2.6263 | 1.0 | 398 | 2.3289 | {'rouge1': 0.8429091284800581, 'rouge2': 0.6936436106699162, 'rougeL': 0.8148324021885509, 'rougeLsum': 0.8436115528203997} | [0.8182599825705432, 0.7573989130361584, 0.6993616529579798, 0.6397083137823109] | 241.8588 |
+ | 1.4454 | 2.0 | 796 | 2.2673 | {'rouge1': 0.8676191825821662, 'rouge2': 0.7714826591897748, 'rougeL': 0.8384600375456672, 'rougeLsum': 0.8676504211460437} | [0.838231259049653, 0.7749860292910674, 0.715811357776676, 0.6553787760578684] | 241.8588 |
+ | 0.6829 | 3.0 | 1194 | 2.4864 | {'rouge1': 0.8349379511354852, 'rouge2': 0.7274477012465996, 'rougeL': 0.8017731965466872, 'rougeLsum': 0.8347274626949961} | [0.8344549267092076, 0.7702670675251023, 0.7108107057859971, 0.6501643333897235] | 241.8588 |
+
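+ The `Rouge Scores` dictionaries and four-element `Bleu Scores` lists above look like raw outputs of standard metric libraries; the evaluation code itself is not part of this card. A hedged sketch of how such numbers can be produced with the `evaluate` package follows; the four BLEU values resemble 1- to 4-gram precisions, and the texts below are placeholders.
+
+ ```python
+ # Hedged sketch: computing ROUGE / BLEU over generated vs. reference texts with
+ # the `evaluate` library. The actual evaluation code is not shown in this card.
+ import evaluate
+
+ rouge = evaluate.load("rouge")
+ bleu = evaluate.load("bleu")
+
+ predictions = ["生成されたテキストの例"]   # model outputs (placeholder)
+ references = ["参照テキストの例"]          # gold responses (placeholder)
+
+ rouge_scores = rouge.compute(predictions=predictions, references=references)
+ bleu_result = bleu.compute(predictions=predictions, references=references)
+
+ print(rouge_scores)               # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
+ print(bleu_result["precisions"])  # four n-gram precisions (1- to 4-gram)
+ ```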
+
+ ### Framework versions
+
+ - PEFT 0.8.2
+ - Transformers 4.39.0.dev0
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.17.2.dev0
+ - Tokenizers 0.15.2
runs/Feb24_14-52-58_dbc3db1324c3/events.out.tfevents.1708786430.dbc3db1324c3.4675.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9a0585ce3a2a72b0e925efb2f6eae94c1a8aeb6dea8603b07bb5ba8be2a0cfa3
- size 6730
+ oid sha256:b49a30f5390ad42d0594f18b12db7a55b555314a73ad22b0dc38e48865c9708f
+ size 7084