tyzhu committed
Commit 5765f9a · verified · 1 Parent(s): 1e02792

Model save

Files changed (1)
  1. README.md +63 -77
README.md CHANGED
@@ -1,26 +1,13 @@
 ---
-license: other
-base_model: Qwen/Qwen1.5-4B
+license: llama2
+base_model: meta-llama/Llama-2-7b-hf
 tags:
 - generated_from_trainer
-datasets:
-- tyzhu/lmind_nq_train6000_eval6489_v1_qa
 metrics:
 - accuracy
 model-index:
 - name: lmind_nq_train6000_eval6489_v1_qa_5e-4_lora2
-  results:
-  - task:
-      name: Causal Language Modeling
-      type: text-generation
-    dataset:
-      name: tyzhu/lmind_nq_train6000_eval6489_v1_qa
-      type: tyzhu/lmind_nq_train6000_eval6489_v1_qa
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.5299487179487179
-library_name: peft
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -28,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # lmind_nq_train6000_eval6489_v1_qa_5e-4_lora2
 
-This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_nq_train6000_eval6489_v1_qa dataset.
+This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.9170
-- Accuracy: 0.5299
+- Loss: 5.5751
+- Accuracy: 0.3655
 
 ## Model description
 
@@ -51,12 +38,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0005
-- train_batch_size: 1
+- train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 4
-- gradient_accumulation_steps: 8
+- gradient_accumulation_steps: 4
 - total_train_batch_size: 32
 - total_eval_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
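
These settings imply an effective batch of 2 per device × 4 GPUs × 4 gradient-accumulation steps = 32, matching the reported total_train_batch_size. As a minimal sketch, the same configuration expressed with `transformers.TrainingArguments` might look as follows; `output_dir` and `num_train_epochs=50` are assumptions (the training script is not shown; the results table in the next hunk stops at epoch ~49.87), the rest mirrors the list above:

```python
# Sketch of the card's hyperparameters as transformers.TrainingArguments.
# output_dir and num_train_epochs are illustrative assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lmind_nq_train6000_eval6489_v1_qa_5e-4_lora2",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=2,  # train_batch_size: 2
    per_device_eval_batch_size=2,   # eval_batch_size: 2
    gradient_accumulation_steps=4,
    seed=42,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # epsilon=1e-08
    num_train_epochs=50,            # assumed from the ~49.87-epoch table
)

# Effective train batch: 2 per device x 4 GPUs x 4 accumulation steps = 32.
```
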
@@ -66,64 +53,63 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-------:|:----:|:---------------:|:--------:|
-| 1.7217 | 0.9973 | 187 | 1.6210 | 0.5739 |
-| 1.2605 | 2.0 | 375 | 1.6909 | 0.5707 |
-| 0.9141 | 2.9973 | 562 | 1.8319 | 0.5657 |
-| 0.7223 | 4.0 | 750 | 1.9418 | 0.5638 |
-| 0.6434 | 4.9973 | 937 | 2.0937 | 0.5602 |
-| 0.5957 | 6.0 | 1125 | 2.1787 | 0.5573 |
-| 0.5728 | 6.9973 | 1312 | 2.2602 | 0.558 |
-| 0.5596 | 8.0 | 1500 | 2.2223 | 0.5559 |
-| 0.5218 | 8.9973 | 1687 | 2.2797 | 0.5538 |
-| 0.5254 | 10.0 | 1875 | 2.2758 | 0.5561 |
-| 0.526 | 10.9973 | 2062 | 2.2686 | 0.5544 |
-| 0.5396 | 12.0 | 2250 | 2.2792 | 0.5534 |
-| 0.5466 | 12.9973 | 2437 | 2.2916 | 0.5501 |
-| 0.556 | 14.0 | 2625 | 2.3210 | 0.5497 |
-| 0.5501 | 14.9973 | 2812 | 2.3918 | 0.5465 |
-| 0.5461 | 16.0 | 3000 | 2.3702 | 0.5476 |
-| 0.5032 | 16.9973 | 3187 | 2.4440 | 0.5461 |
-| 0.501 | 18.0 | 3375 | 2.4241 | 0.5468 |
-| 0.4975 | 18.9973 | 3562 | 2.4591 | 0.5451 |
-| 0.4977 | 20.0 | 3750 | 2.4378 | 0.5450 |
-| 0.501 | 20.9973 | 3937 | 2.4488 | 0.5438 |
-| 0.5133 | 22.0 | 4125 | 2.5483 | 0.5429 |
-| 0.5154 | 22.9973 | 4312 | 2.4610 | 0.5442 |
-| 0.5257 | 24.0 | 4500 | 2.5031 | 0.5449 |
-| 0.5065 | 24.9973 | 4687 | 2.5107 | 0.5415 |
-| 0.5177 | 26.0 | 4875 | 2.5011 | 0.5402 |
-| 0.5115 | 26.9973 | 5062 | 2.5476 | 0.5407 |
-| 0.5041 | 28.0 | 5250 | 2.5786 | 0.5393 |
-| 0.5014 | 28.9973 | 5437 | 2.6879 | 0.5384 |
-| 0.5005 | 30.0 | 5625 | 2.5663 | 0.5392 |
-| 0.5015 | 30.9973 | 5812 | 2.6379 | 0.5392 |
-| 0.5061 | 32.0 | 6000 | 2.7169 | 0.5354 |
-| 0.4849 | 32.9973 | 6187 | 2.6270 | 0.5365 |
-| 0.4969 | 34.0 | 6375 | 2.6982 | 0.5352 |
-| 0.5034 | 34.9973 | 6562 | 2.6722 | 0.5345 |
-| 0.5052 | 36.0 | 6750 | 2.6832 | 0.5345 |
-| 0.5123 | 36.9973 | 6937 | 2.6866 | 0.5364 |
-| 0.5061 | 38.0 | 7125 | 2.6574 | 0.5329 |
-| 0.5032 | 38.9973 | 7312 | 2.6847 | 0.5367 |
-| 0.5051 | 40.0 | 7500 | 2.6602 | 0.536 |
-| 0.4813 | 40.9973 | 7687 | 2.7159 | 0.5351 |
-| 0.4827 | 42.0 | 7875 | 2.7482 | 0.5349 |
-| 0.492 | 42.9973 | 8062 | 2.7285 | 0.5345 |
-| 0.4973 | 44.0 | 8250 | 2.8042 | 0.5308 |
-| 0.5035 | 44.9973 | 8437 | 2.7272 | 0.5307 |
-| 0.5078 | 46.0 | 8625 | 2.7626 | 0.5310 |
-| 0.5043 | 46.9973 | 8812 | 2.7688 | 0.5288 |
-| 0.4997 | 48.0 | 9000 | 2.8357 | 0.5311 |
-| 0.4771 | 48.9973 | 9187 | 2.8296 | 0.5316 |
-| 0.4793 | 49.8667 | 9350 | 2.9170 | 0.5299 |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 1.43 | 1.0 | 187 | 1.2683 | 0.6162 |
+| 1.0285 | 2.0 | 375 | 1.3220 | 0.6129 |
+| 0.7318 | 3.0 | 562 | 1.4645 | 0.6076 |
+| 0.5898 | 4.0 | 750 | 1.5454 | 0.6050 |
+| 0.5309 | 5.0 | 937 | 1.6439 | 0.6026 |
+| 0.4985 | 6.0 | 1125 | 1.7220 | 0.6034 |
+| 0.5091 | 7.0 | 1312 | 1.8008 | 0.6008 |
+| 0.4796 | 8.0 | 1500 | 1.7782 | 0.6001 |
+| 0.4453 | 9.0 | 1687 | 1.8255 | 0.5985 |
+| 0.448 | 10.0 | 1875 | 1.7979 | 0.5931 |
+| 0.4522 | 11.0 | 2062 | 1.8272 | 0.5959 |
+| 0.4552 | 12.0 | 2250 | 1.8670 | 0.5946 |
+| 0.4551 | 13.0 | 2437 | 1.8706 | 0.5950 |
+| 0.4559 | 14.0 | 2625 | 1.8731 | 0.5925 |
+| 0.4581 | 15.0 | 2812 | 1.8531 | 0.5932 |
+| 0.4535 | 16.0 | 3000 | 1.9492 | 0.5923 |
+| 0.4308 | 17.0 | 3187 | 1.8944 | 0.5915 |
+| 0.4312 | 18.0 | 3375 | 1.9315 | 0.5904 |
+| 0.4372 | 19.0 | 3562 | 1.9201 | 0.5899 |
+| 0.4359 | 20.0 | 3750 | 1.9753 | 0.5895 |
+| 0.4363 | 21.0 | 3937 | 1.9932 | 0.5877 |
+| 0.4404 | 22.0 | 4125 | 2.0326 | 0.5866 |
+| 0.4436 | 23.0 | 4312 | 2.0008 | 0.5848 |
+| 0.4438 | 24.0 | 4500 | 2.0186 | 0.5877 |
+| 0.4233 | 25.0 | 4687 | 2.0452 | 0.5863 |
+| 0.4237 | 26.0 | 4875 | 2.0520 | 0.5843 |
+| 0.4289 | 27.0 | 5062 | 2.0817 | 0.5828 |
+| 0.4325 | 28.0 | 5250 | 2.0512 | 0.5833 |
+| 0.4329 | 29.0 | 5437 | 2.0906 | 0.5828 |
+| 0.4314 | 30.0 | 5625 | 2.0403 | 0.5824 |
+| 0.431 | 31.0 | 5812 | 2.1194 | 0.5824 |
+| 0.4318 | 32.0 | 6000 | 2.0985 | 0.5829 |
+| 0.414 | 33.0 | 6187 | 2.1533 | 0.5805 |
+| 0.4214 | 34.0 | 6375 | 2.1918 | 0.5779 |
+| 0.4264 | 35.0 | 6562 | 2.1835 | 0.5774 |
+| 0.4361 | 36.0 | 6750 | 2.1864 | 0.5771 |
+| 0.4369 | 37.0 | 6937 | 2.1546 | 0.5761 |
+| 0.4362 | 38.0 | 7125 | 2.1423 | 0.5752 |
+| 0.4322 | 39.0 | 7312 | 2.1938 | 0.5778 |
+| 0.4359 | 40.0 | 7500 | 2.2000 | 0.5752 |
+| 0.4153 | 41.0 | 7687 | 2.2344 | 0.5751 |
+| 0.4195 | 42.0 | 7875 | 2.2526 | 0.5747 |
+| 0.9164 | 43.0 | 8062 | 2.1985 | 0.5717 |
+| 0.4295 | 44.0 | 8250 | 2.2145 | 0.5718 |
+| 0.4298 | 45.0 | 8437 | 2.2211 | 0.5714 |
+| 0.4446 | 46.0 | 8625 | 2.2656 | 0.5703 |
+| 2.0935 | 47.0 | 8812 | 2.6962 | 0.5081 |
+| 3.096 | 48.0 | 9000 | 3.2961 | 0.4494 |
+| 2.9615 | 49.0 | 9187 | 4.3483 | 0.4241 |
+| 4.5736 | 49.87 | 9350 | 5.5751 | 0.3655 |
 
 
 ### Framework versions
 
-- PEFT 0.5.0
-- Transformers 4.41.1
+- Transformers 4.34.0
 - Pytorch 2.1.0+cu121
-- Datasets 2.19.1
-- Tokenizers 0.19.1
+- Datasets 2.18.0
++ Tokenizers 0.14.1
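
For reproducing the environment, the updated Framework versions pin to an older stack than the previous revision of this card (e.g. Transformers 4.41.1 → 4.34.0, and the PEFT entry is gone). Expressed as a `requirements.txt`, the listed versions would be as below; note the `+cu121` PyTorch build is served from PyTorch's CUDA 12.1 wheel index (e.g. via `--extra-index-url https://download.pytorch.org/whl/cu121`), not from PyPI:

```text
# Versions exactly as listed in the updated Framework versions section.
transformers==4.34.0
torch==2.1.0+cu121
datasets==2.18.0
tokenizers==0.14.1
```
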
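The run name ends in `lora2`, and the previous revision of this card declared `library_name: peft` with PEFT 0.5.0, so the checkpoint is presumably a LoRA adapter over [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) even though this commit drops the PEFT entries. Under that assumption, a minimal loading sketch (the adapter repo id is inferred from the model name, and the prompt format is illustrative, since the card does not document one):

```python
# Sketch only: assumes this repo holds a PEFT/LoRA adapter for
# meta-llama/Llama-2-7b-hf, as the previous card revision suggested.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-2-7b-hf"
ADAPTER = "tyzhu/lmind_nq_train6000_eval6489_v1_qa_5e-4_lora2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE)
model = PeftModel.from_pretrained(base_model, ADAPTER)

# Illustrative QA-style prompt; the actual training format is not documented.
prompt = "Question: who wrote the opera Carmen? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```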