# GUE_EMP_H3K79me3-seqsight_4096_512_15M-L32
This model is a fine-tuned version of `mahdibaghbanzadeh/seqsight_4096_512_15M` on the `mahdibaghbanzadeh/GUE_EMP_H3K79me3` dataset. It achieves the following results on the evaluation set:
- Loss: 0.5970
- F1 Score: 0.7014
- Accuracy: 0.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 20000
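The hyperparameters above map directly onto the 🤗 Transformers `TrainingArguments` API. A minimal, untested configuration sketch follows; the `output_dir` value is a placeholder and the PEFT adapter wiring is omitted, since neither is specified in this card:

```python
from transformers import TrainingArguments

# Values copied from the hyperparameter list above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="./GUE_EMP_H3K79me3-seqsight_4096_512_15M-L32",  # hypothetical
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=20000,  # corresponds to "training_steps" above
)
```

Note that Adam's betas and epsilon are passed as the separate `adam_beta1`, `adam_beta2`, and `adam_epsilon` arguments rather than as a tuple.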
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
0.634 | 2.21 | 400 | 0.5905 | 0.6862 | 0.6879 |
0.5823 | 4.42 | 800 | 0.5909 | 0.6911 | 0.6924 |
0.5701 | 6.63 | 1200 | 0.5940 | 0.6886 | 0.6973 |
0.5594 | 8.84 | 1600 | 0.6040 | 0.6803 | 0.6935 |
0.5511 | 11.05 | 2000 | 0.5772 | 0.6976 | 0.6983 |
0.5418 | 13.26 | 2400 | 0.5796 | 0.7014 | 0.7011 |
0.5356 | 15.47 | 2800 | 0.5797 | 0.7085 | 0.7101 |
0.5228 | 17.68 | 3200 | 0.5762 | 0.7065 | 0.7063 |
0.5163 | 19.89 | 3600 | 0.5749 | 0.7083 | 0.7080 |
0.5051 | 22.1 | 4000 | 0.5886 | 0.7149 | 0.7150 |
0.4992 | 24.31 | 4400 | 0.5958 | 0.7178 | 0.7181 |
0.493 | 26.52 | 4800 | 0.5775 | 0.7117 | 0.7115 |
0.4843 | 28.73 | 5200 | 0.5946 | 0.7090 | 0.7105 |
0.4815 | 30.94 | 5600 | 0.6009 | 0.7057 | 0.7053 |
0.4746 | 33.15 | 6000 | 0.6112 | 0.7111 | 0.7119 |
0.4714 | 35.36 | 6400 | 0.6226 | 0.7107 | 0.7112 |
0.4654 | 37.57 | 6800 | 0.5927 | 0.7124 | 0.7122 |
0.4644 | 39.78 | 7200 | 0.6007 | 0.7119 | 0.7126 |
0.4631 | 41.99 | 7600 | 0.5986 | 0.7135 | 0.7150 |
0.454 | 44.2 | 8000 | 0.6115 | 0.7110 | 0.7108 |
0.4541 | 46.41 | 8400 | 0.6328 | 0.7012 | 0.7053 |
0.4495 | 48.62 | 8800 | 0.6189 | 0.7111 | 0.7119 |
0.4474 | 50.83 | 9200 | 0.6232 | 0.7083 | 0.7098 |
0.4405 | 53.04 | 9600 | 0.6259 | 0.7112 | 0.7108 |
0.4364 | 55.25 | 10000 | 0.6099 | 0.7081 | 0.7098 |
0.4334 | 57.46 | 10400 | 0.6327 | 0.7133 | 0.7132 |
0.4355 | 59.67 | 10800 | 0.6326 | 0.7117 | 0.7119 |
0.4308 | 61.88 | 11200 | 0.6169 | 0.7139 | 0.7150 |
0.4283 | 64.09 | 11600 | 0.6267 | 0.7123 | 0.7132 |
0.4241 | 66.3 | 12000 | 0.6166 | 0.7102 | 0.7108 |
0.4194 | 68.51 | 12400 | 0.6630 | 0.7077 | 0.7077 |
0.4189 | 70.72 | 12800 | 0.6475 | 0.7071 | 0.7077 |
0.4174 | 72.93 | 13200 | 0.6351 | 0.7100 | 0.7119 |
0.4141 | 75.14 | 13600 | 0.6461 | 0.7079 | 0.7077 |
0.4108 | 77.35 | 14000 | 0.6512 | 0.7076 | 0.7077 |
0.4054 | 79.56 | 14400 | 0.6721 | 0.7105 | 0.7101 |
0.407 | 81.77 | 14800 | 0.6266 | 0.7104 | 0.7108 |
0.4066 | 83.98 | 15200 | 0.6401 | 0.7097 | 0.7094 |
0.4019 | 86.19 | 15600 | 0.6580 | 0.7091 | 0.7087 |
0.4005 | 88.4 | 16000 | 0.6514 | 0.7118 | 0.7129 |
0.3991 | 90.61 | 16400 | 0.6499 | 0.7133 | 0.7139 |
0.4002 | 92.82 | 16800 | 0.6624 | 0.7131 | 0.7136 |
0.3961 | 95.03 | 17200 | 0.6589 | 0.7085 | 0.7101 |
0.3951 | 97.24 | 17600 | 0.6709 | 0.7119 | 0.7129 |
0.3931 | 99.45 | 18000 | 0.6598 | 0.7091 | 0.7091 |
0.3899 | 101.66 | 18400 | 0.6676 | 0.7104 | 0.7108 |
0.3907 | 103.87 | 18800 | 0.6622 | 0.7105 | 0.7108 |
0.3887 | 106.08 | 19200 | 0.6606 | 0.7141 | 0.7146 |
0.3845 | 108.29 | 19600 | 0.6660 | 0.7136 | 0.7143 |
0.3868 | 110.5 | 20000 | 0.6669 | 0.7089 | 0.7091 |
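As a sanity check on the table, the epoch and step columns pin down the approximate size of the training split. This is a back-of-the-envelope estimate derived from the logged values, not a figure reported in the card:

```python
# Values taken from the hyperparameter list and the last table row.
total_steps = 20000
final_epoch = 110.5
batch_size = 128

steps_per_epoch = total_steps / final_epoch        # ≈ 181 optimizer steps
approx_train_examples = steps_per_epoch * batch_size  # ≈ 23k examples

print(round(steps_per_epoch), round(approx_train_examples))
```

So each epoch covers roughly 181 batches of 128 examples, implying a training set of about 23,000 sequences.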
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2