Commit 7b63d40 by ghazikhanihamed
Parent: 71d251f

PLM-Secondary-Structure-Generation

Files changed (3):
  1. README.md +25 -2
  2. pytorch_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -14,6 +14,9 @@ should probably proofread and complete it, then remove this comment. -->
 # TooT-PLM-P2S
 
 This model is a fine-tuned version of [ElnaggarLab/ankh-base](https://huggingface.co/ElnaggarLab/ankh-base) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.1451
+- Q3 Accuracy: 0.7122
 
 ## Model description
 
@@ -32,16 +35,36 @@
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001933359768115612
+- learning_rate: 0.0003
 - train_batch_size: 1
 - eval_batch_size: 8
 - seed: 42
+- distributed_type: multi-GPU
+- num_devices: 6
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 4
+- total_train_batch_size: 24
+- total_eval_batch_size: 48
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
+- lr_scheduler_warmup_ratio: 0.2
 - num_epochs: 10
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Q3 Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:-----------:|
+| 0.2036        | 1.0   | 449  | 0.1943          | 0.5833      |
+| 0.1686        | 2.0   | 899  | 0.1864          | 0.5688      |
+| 0.1597        | 3.0   | 1349 | 0.1770          | 0.5774      |
+| 0.159         | 4.0   | 1799 | 0.1740          | 0.6245      |
+| 0.1503        | 5.0   | 2248 | 0.1731          | 0.6851      |
+| 0.1479        | 6.0   | 2698 | 0.1670          | 0.5961      |
+| 0.1447        | 7.0   | 3148 | 0.1617          | 0.5936      |
+| 0.1395        | 8.0   | 3598 | 0.1550          | 0.6307      |
+| 0.1298        | 9.0   | 4047 | 0.1481          | 0.5573      |
+| 0.1187        | 9.98  | 4490 | 0.1451          | 0.7122      |
+
+
 ### Framework versions
 
 - Transformers 4.34.1
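For context on the new hyperparameters, the effective batch sizes follow from the distributed setup: total_train_batch_size 24 = 1 per device × 6 GPUs × 4 gradient-accumulation steps, and total_eval_batch_size 48 = 8 per device × 6 GPUs. A minimal sketch of expressing these values with the standard `transformers` `TrainingArguments` API; the `output_dir` and anything else not listed in the diff are assumptions, not taken from this commit.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed in the README diff above.
# output_dir is an assumption; it is not part of the model card.
args = TrainingArguments(
    output_dir="toot-plm-p2s",           # assumed
    learning_rate=3e-4,                  # new value from the diff (0.0003)
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    seed=42,
)

# With 6 GPUs (multi-GPU, as in the card):
# effective train batch = 1 * 6 * 4 = 24, effective eval batch = 8 * 6 = 48.
```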
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8862e9144ac3bfa52787231c8719bc029c43f09204b59a176cb5752c72029658
+oid sha256:8cfb5fab5b7f542d02f530648f7bdf61d04a06ddf7da1a8120d7cd389d1a9bba
 size 2946083298
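pytorch_model.bin is stored as a Git LFS pointer, so the diff above records only the new checksum and size. A minimal sketch, assuming the weight file has been downloaded locally as `pytorch_model.bin`, of checking it against the updated sha256:

```python
import hashlib

# Expected checksum taken from the updated LFS pointer above.
EXPECTED_SHA256 = "8cfb5fab5b7f542d02f530648f7bdf61d04a06ddf7da1a8120d7cd389d1a9bba"

h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:          # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == EXPECTED_SHA256, "pytorch_model.bin checksum mismatch"
```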
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8dbaf5c6bd95e42b3b7821f5b9f8748f67abeccc93b5f84a0afd2d349a2d7448
+oid sha256:542d69ffedbf0cb3e00f9827d9caf865d1297630f202c7e41c2450cc22ace52f
 size 4600
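For reference, a minimal sketch of loading the updated checkpoint with `transformers`. The repository id (`ghazikhanihamed/TooT-PLM-P2S`), the seq2seq head, and the input/output handling are assumptions based on the card describing a fine-tune of the encoder-decoder ElnaggarLab/ankh-base; they are not confirmed by this commit.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repository id for this model card; adjust if the repo lives elsewhere.
repo_id = "ghazikhanihamed/TooT-PLM-P2S"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Example protein sequence; the exact preprocessing and the 3-state (Q3)
# output format are assumptions, not documented in this commit.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(sequence, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=len(sequence))
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```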