jazzson committed (verified)
Commit eac086b · Parent: 292e949

End of training

Files changed (1): README.md (+76, -0)

README.md ADDED
---
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: adl-hw3-finetune-gemma-2-chinese-kyara-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# adl-hw3-finetune-gemma-2-chinese-kyara-2

This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9932

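The card ships no usage code; the following is a minimal inference sketch. It assumes the adapter is published under the repo id `jazzson/adl-hw3-finetune-gemma-2-chinese-kyara-2` (inferred from the commit author and the model name above; adjust if the actual repo id differs) and that `transformers`, `peft`, `torch`, and `accelerate` are installed.

```python
# Minimal inference sketch (not from the original card).
# Assumption: the adapter repo id is "jazzson/adl-hw3-finetune-gemma-2-chinese-kyara-2".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "zake7749/gemma-2-2b-it-chinese-kyara-dpo"
adapter_id = "jazzson/adl-hw3-finetune-gemma-2-chinese-kyara-2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter
model.eval()

# Gemma-2 instruction models expect the chat template bundled with the tokenizer.
messages = [{"role": "user", "content": "請用一句話介紹台灣。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the adapter is a LoRA adapter, `model = model.merge_and_unload()` folds its weights into the base model for adapter-free deployment.
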
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

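For orientation, this is a sketch of how the values above would typically be passed to a Hugging Face `TrainingArguments`; the output directory is a placeholder, and the LoRA/PEFT configuration used for the actual run is not recorded in this card.

```python
# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
# output_dir is a placeholder; the PEFT/LoRA settings of the real run are not recorded here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="adl-hw3-finetune-gemma-2-chinese-kyara-2",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 8 per device x 8 accumulation = total batch size 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    # Trainer's default optimizer uses betas=(0.9, 0.999) and epsilon=1e-08,
    # matching the optimizer values listed above.
)
```
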
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3974        | 0.1778 | 25   | 2.1847          |
| 2.1716        | 0.3556 | 50   | 2.0972          |
| 2.1123        | 0.5333 | 75   | 2.0470          |
| 2.0123        | 0.7111 | 100  | 2.0131          |
| 1.9901        | 0.8889 | 125  | 1.9926          |
| 1.9153        | 1.0667 | 150  | 1.9910          |
| 1.7569        | 1.2444 | 175  | 1.9843          |
| 1.7971        | 1.4222 | 200  | 1.9748          |
| 1.8106        | 1.6    | 225  | 1.9597          |
| 1.7733        | 1.7778 | 250  | 1.9526          |
| 1.7275        | 1.9556 | 275  | 1.9500          |
| 1.6153        | 2.1333 | 300  | 1.9988          |
| 1.5536        | 2.3111 | 325  | 1.9955          |
| 1.5153        | 2.4889 | 350  | 1.9992          |
| 1.5445        | 2.6667 | 375  | 1.9893          |
| 1.544         | 2.8444 | 400  | 1.9932          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.45.1
- PyTorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.2
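
A quick way to check that a local environment matches the versions above (a sketch, assuming all five packages are installed):

```python
# Print installed versions to compare against the list above.
import datasets
import peft
import tokenizers
import torch
import transformers

for name, module in [
    ("PEFT", peft),
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```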