AlekseyKorshuk committed on
Commit 776fb1c
1 Parent(s): 8e62039

update model card README.md

Files changed (1)
  1. README.md +34 -66
README.md CHANGED
@@ -2,34 +2,22 @@
  license: other
  tags:
  - generated_from_trainer
- datasets:
- - AlekseyKorshuk/dalio-handwritten-io
  metrics:
  - accuracy
  model-index:
- - name: test-clm
-   results:
-   - task:
-       name: Causal Language Modeling
-       type: text-generation
-     dataset:
-       name: AlekseyKorshuk/dalio-handwritten-io
-       type: AlekseyKorshuk/dalio-handwritten-io
-     metrics:
-     - name: Accuracy
-       type: accuracy
-       value: 0.06414321574844761
+ - name: dalio-handwritten-io-1.3b
+   results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # test-clm
+ # dalio-handwritten-io-1.3b

- This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-handwritten-io dataset.
+ This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.5547
- - Accuracy: 0.0641
+ - Loss: 2.3789
+ - Accuracy: 0.0614

  ## Model description
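
The updated card above describes a causal-LM checkpoint fine-tuned from facebook/opt-1.3b. For quick orientation, here is a minimal generation sketch using the standard `transformers` API; the repo id `AlekseyKorshuk/dalio-handwritten-io-1.3b` and the prompt are assumptions derived from the card's name, not something stated in this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed hub id, derived from the model name in the card; adjust if the
# checkpoint is published under a different repo.
model_id = "AlekseyKorshuk/dalio-handwritten-io-1.3b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The prompt is purely illustrative.
inputs = tokenizer("Principles for handling setbacks:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```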
 
@@ -58,7 +46,7 @@ The following hyperparameters were used during training:
  - total_eval_batch_size: 16
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
- - num_epochs: 5.0
+ - num_epochs: 3.0

  ### Training results
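
The hyperparameters in the hunk above (Adam with betas=(0.9,0.999) and epsilon=1e-08, a cosine schedule, total_eval_batch_size 16, and the epoch count changed from 5.0 to 3.0) map directly onto `transformers.TrainingArguments`. A minimal sketch follows; values not visible in this diff (learning rate, train batch size, evaluation cadence) are placeholders, not the run's actual settings.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dalio-handwritten-io-1.3b",
    num_train_epochs=3.0,            # changed from 5.0 in this commit
    lr_scheduler_type="cosine",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    per_device_eval_batch_size=16,   # assumes one device, matching the reported total_eval_batch_size of 16
    evaluation_strategy="steps",     # placeholder: the results table logs an eval at every step
    eval_steps=1,                    # placeholder, inferred from the table below
    learning_rate=5e-5,              # placeholder: not visible in this diff
)
```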
 
@@ -67,53 +55,33 @@ The following hyperparameters were used during training:
  | 2.9219 | 0.1 | 1 | 2.6484 | 0.0529 |
  | 2.6938 | 0.2 | 2 | 2.6484 | 0.0529 |
  | 2.6365 | 0.3 | 3 | 2.5508 | 0.0560 |
- | 2.5088 | 0.4 | 4 | 2.5332 | 0.0563 |
- | 2.7297 | 0.5 | 5 | 2.5176 | 0.0567 |
- | 2.9702 | 0.6 | 6 | 2.4941 | 0.0572 |
- | 2.729 | 0.7 | 7 | 2.4883 | 0.0568 |
- | 2.6172 | 0.8 | 8 | 2.4785 | 0.0578 |
- | 2.6428 | 0.9 | 9 | 2.4590 | 0.0581 |
- | 2.5681 | 1.0 | 10 | 2.4355 | 0.0590 |
- | 2.1885 | 1.1 | 11 | 2.4238 | 0.0587 |
- | 1.981 | 1.2 | 12 | 2.4219 | 0.0587 |
- | 1.8673 | 1.3 | 13 | 2.4180 | 0.0591 |
- | 1.7321 | 1.4 | 14 | 2.4180 | 0.0596 |
- | 1.6355 | 1.5 | 15 | 2.4180 | 0.0601 |
- | 1.7758 | 1.6 | 16 | 2.4199 | 0.0602 |
- | 2.0162 | 1.7 | 17 | 2.4082 | 0.0605 |
- | 1.8037 | 1.8 | 18 | 2.3965 | 0.0605 |
- | 1.7204 | 1.9 | 19 | 2.375 | 0.0608 |
- | 1.7831 | 2.0 | 20 | 2.3574 | 0.0609 |
- | 1.299 | 2.1 | 21 | 2.3496 | 0.0616 |
- | 1.4463 | 2.2 | 22 | 2.3496 | 0.0620 |
- | 1.1733 | 2.3 | 23 | 2.3652 | 0.0617 |
- | 1.1142 | 2.4 | 24 | 2.3887 | 0.0626 |
- | 1.3107 | 2.5 | 25 | 2.4219 | 0.0627 |
- | 1.011 | 2.6 | 26 | 2.4551 | 0.0622 |
- | 1.3403 | 2.7 | 27 | 2.4766 | 0.0616 |
- | 1.3108 | 2.8 | 28 | 2.4766 | 0.0616 |
- | 1.0076 | 2.9 | 29 | 2.4609 | 0.0619 |
- | 0.8656 | 3.0 | 30 | 2.4512 | 0.0624 |
- | 0.6635 | 3.1 | 31 | 2.4512 | 0.0628 |
- | 0.9996 | 3.2 | 32 | 2.4434 | 0.0635 |
- | 0.9029 | 3.3 | 33 | 2.4473 | 0.0637 |
- | 0.8329 | 3.4 | 34 | 2.4551 | 0.0637 |
- | 0.8012 | 3.5 | 35 | 2.4648 | 0.0639 |
- | 0.5814 | 3.6 | 36 | 2.4902 | 0.0640 |
- | 1.0688 | 3.7 | 37 | 2.5098 | 0.0638 |
- | 0.8688 | 3.8 | 38 | 2.5176 | 0.0635 |
- | 0.7341 | 3.9 | 39 | 2.5195 | 0.0638 |
- | 0.7102 | 4.0 | 40 | 2.5195 | 0.0640 |
- | 0.7079 | 4.1 | 41 | 2.5195 | 0.0641 |
- | 0.7656 | 4.2 | 42 | 2.5195 | 0.0643 |
- | 0.6377 | 4.3 | 43 | 2.5273 | 0.0645 |
- | 0.5898 | 4.4 | 44 | 2.5352 | 0.0641 |
- | 0.5958 | 4.5 | 45 | 2.5430 | 0.0641 |
- | 0.7048 | 4.6 | 46 | 2.5488 | 0.0640 |
- | 0.5435 | 4.7 | 47 | 2.5527 | 0.0641 |
- | 0.4769 | 4.8 | 48 | 2.5527 | 0.0640 |
- | 0.6583 | 4.9 | 49 | 2.5547 | 0.0642 |
- | 0.7168 | 5.0 | 50 | 2.5547 | 0.0641 |
+ | 2.5088 | 0.4 | 4 | 2.5332 | 0.0562 |
+ | 2.7307 | 0.5 | 5 | 2.5176 | 0.0565 |
+ | 2.969 | 0.6 | 6 | 2.4941 | 0.0571 |
+ | 2.7283 | 0.7 | 7 | 2.4883 | 0.0567 |
+ | 2.6157 | 0.8 | 8 | 2.4766 | 0.0578 |
+ | 2.6406 | 0.9 | 9 | 2.4590 | 0.0583 |
+ | 2.5701 | 1.0 | 10 | 2.4375 | 0.0587 |
+ | 2.2017 | 1.1 | 11 | 2.4238 | 0.0587 |
+ | 2.0039 | 1.2 | 12 | 2.4219 | 0.0586 |
+ | 1.8981 | 1.3 | 13 | 2.4160 | 0.0589 |
+ | 1.7683 | 1.4 | 14 | 2.4160 | 0.0595 |
+ | 1.6746 | 1.5 | 15 | 2.4121 | 0.0600 |
+ | 1.8051 | 1.6 | 16 | 2.4102 | 0.0600 |
+ | 2.0457 | 1.7 | 17 | 2.4043 | 0.0602 |
+ | 1.8257 | 1.8 | 18 | 2.4004 | 0.0606 |
+ | 1.744 | 1.9 | 19 | 2.3887 | 0.0607 |
+ | 1.8232 | 2.0 | 20 | 2.3887 | 0.0607 |
+ | 1.4741 | 2.1 | 21 | 2.3828 | 0.0610 |
+ | 1.651 | 2.2 | 22 | 2.3770 | 0.0608 |
+ | 1.3732 | 2.3 | 23 | 2.3730 | 0.0610 |
+ | 1.3151 | 2.4 | 24 | 2.3730 | 0.0610 |
+ | 1.5302 | 2.5 | 25 | 2.3730 | 0.0610 |
+ | 1.2539 | 2.6 | 26 | 2.375 | 0.0612 |
+ | 1.6211 | 2.7 | 27 | 2.3770 | 0.0612 |
+ | 1.6047 | 2.8 | 28 | 2.3770 | 0.0613 |
+ | 1.1953 | 2.9 | 29 | 2.3789 | 0.0614 |
+ | 1.1621 | 3.0 | 30 | 2.3789 | 0.0614 |
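
For context on the results table: the Accuracy column is token-level (next-token) accuracy, the kind of metric `run_clm`-style training scripts report, which is why the values sit around 0.06 rather than anything resembling a task accuracy. Below is a plausible sketch of how such a metric is typically computed; the exact implementation used for this run is not shown in the card.

```python
import numpy as np

def token_accuracy(logits: np.ndarray, labels: np.ndarray, ignore_index: int = -100) -> float:
    """Fraction of positions where the argmax prediction matches the next token.

    logits: (batch, seq_len, vocab_size) model outputs
    labels: (batch, seq_len) target token ids, with ignore_index on padding
    """
    preds = logits.argmax(axis=-1)
    # Shift so the prediction at position i is scored against the token at i + 1.
    preds, labels = preds[:, :-1], labels[:, 1:]
    mask = labels != ignore_index
    return float((preds[mask] == labels[mask]).mean())
```

Masking `ignore_index` keeps padded or prompt-masked positions out of the average, matching the usual label convention for causal-LM fine-tuning.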


  ### Framework versions