
GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_16384_512_56M on the mahdibaghbanzadeh/GUE_EMP_H3K79me3 dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.4296
  • F1 Score: 0.8218
  • Accuracy: 0.8225
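
The card does not include example code, so the following is a minimal inference sketch only. It assumes this repository hosts a PEFT adapter over the seqsight base model, that the task is binary sequence classification, and that the adapter lives at mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f; the repo id, label count, and any trust_remote_code requirement are assumptions, not facts from the card.

```python
# Hedged inference sketch: load the seqsight base model, attach the PEFT adapter,
# and score a toy DNA sequence. Repo ids and num_labels are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f"  # assumed adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary H3K79me3 classification assumed
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

sequence = "ACGT" * 32  # toy DNA sequence for illustration
inputs = tokenizer(sequence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```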

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
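
The listed hyperparameters map naturally onto the Hugging Face Trainer API. The sketch below is illustrative only: the authors' actual training script is not part of the card, and the output_dir name is an assumption.

```python
# Hedged sketch of TrainingArguments matching the hyperparameters above.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```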

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:--------------|:------|:------|:----------------|:---------|:---------|
| 0.4925        | 1.1   | 200   | 0.4580          | 0.8028   | 0.8027   |
| 0.4557        | 2.21  | 400   | 0.4510          | 0.8060   | 0.8072   |
| 0.4499        | 3.31  | 600   | 0.4464          | 0.8042   | 0.8058   |
| 0.4395        | 4.42  | 800   | 0.4424          | 0.8066   | 0.8079   |
| 0.4391        | 5.52  | 1000  | 0.4490          | 0.8007   | 0.8027   |
| 0.4291        | 6.63  | 1200  | 0.4541          | 0.7968   | 0.7992   |
| 0.4313        | 7.73  | 1400  | 0.4366          | 0.8060   | 0.8072   |
| 0.4228        | 8.84  | 1600  | 0.4589          | 0.7947   | 0.7979   |
| 0.4228        | 9.94  | 1800  | 0.4297          | 0.8143   | 0.8145   |
| 0.4193        | 11.05 | 2000  | 0.4448          | 0.8044   | 0.8058   |
| 0.4188        | 12.15 | 2200  | 0.4314          | 0.8130   | 0.8135   |
| 0.4139        | 13.26 | 2400  | 0.4306          | 0.8092   | 0.8100   |
| 0.415         | 14.36 | 2600  | 0.4272          | 0.8132   | 0.8138   |
| 0.4126        | 15.47 | 2800  | 0.4396          | 0.8075   | 0.8089   |
| 0.4105        | 16.57 | 3000  | 0.4327          | 0.8148   | 0.8148   |
| 0.4098        | 17.68 | 3200  | 0.4307          | 0.8124   | 0.8131   |
| 0.405         | 18.78 | 3400  | 0.4389          | 0.8098   | 0.8110   |
| 0.4054        | 19.89 | 3600  | 0.4358          | 0.8099   | 0.8110   |
| 0.4054        | 20.99 | 3800  | 0.4408          | 0.8114   | 0.8124   |
| 0.4032        | 22.1  | 4000  | 0.4319          | 0.8084   | 0.8096   |
| 0.4011        | 23.2  | 4200  | 0.4315          | 0.8134   | 0.8141   |
| 0.4006        | 24.31 | 4400  | 0.4423          | 0.8098   | 0.8114   |
| 0.3961        | 25.41 | 4600  | 0.4382          | 0.8149   | 0.8159   |
| 0.4012        | 26.52 | 4800  | 0.4318          | 0.8161   | 0.8169   |
| 0.4009        | 27.62 | 5000  | 0.4319          | 0.8166   | 0.8176   |
| 0.3955        | 28.73 | 5200  | 0.4295          | 0.8145   | 0.8155   |
| 0.3934        | 29.83 | 5400  | 0.4325          | 0.8141   | 0.8148   |
| 0.3945        | 30.94 | 5600  | 0.4320          | 0.8162   | 0.8169   |
| 0.3929        | 32.04 | 5800  | 0.4342          | 0.8157   | 0.8162   |
| 0.3925        | 33.15 | 6000  | 0.4293          | 0.8156   | 0.8166   |
| 0.3931        | 34.25 | 6200  | 0.4330          | 0.8134   | 0.8141   |
| 0.3883        | 35.36 | 6400  | 0.4372          | 0.8167   | 0.8176   |
| 0.3917        | 36.46 | 6600  | 0.4272          | 0.8188   | 0.8193   |
| 0.3895        | 37.57 | 6800  | 0.4318          | 0.8156   | 0.8166   |
| 0.3889        | 38.67 | 7000  | 0.4313          | 0.8174   | 0.8183   |
| 0.385         | 39.78 | 7200  | 0.4342          | 0.8164   | 0.8173   |
| 0.3904        | 40.88 | 7400  | 0.4298          | 0.8154   | 0.8159   |
| 0.3863        | 41.99 | 7600  | 0.4323          | 0.8161   | 0.8169   |
| 0.3862        | 43.09 | 7800  | 0.4362          | 0.8164   | 0.8173   |
| 0.3872        | 44.2  | 8000  | 0.4349          | 0.8151   | 0.8162   |
| 0.3857        | 45.3  | 8200  | 0.4290          | 0.8170   | 0.8176   |
| 0.382         | 46.41 | 8400  | 0.4305          | 0.8174   | 0.8180   |
| 0.3883        | 47.51 | 8600  | 0.4331          | 0.8169   | 0.8180   |
| 0.3808        | 48.62 | 8800  | 0.4348          | 0.8162   | 0.8173   |
| 0.3836        | 49.72 | 9000  | 0.4346          | 0.8162   | 0.8173   |
| 0.385         | 50.83 | 9200  | 0.4380          | 0.8141   | 0.8155   |
| 0.3831        | 51.93 | 9400  | 0.4341          | 0.8155   | 0.8166   |
| 0.3824        | 53.04 | 9600  | 0.4324          | 0.8171   | 0.8180   |
| 0.3803        | 54.14 | 9800  | 0.4326          | 0.8161   | 0.8169   |
| 0.382         | 55.25 | 10000 | 0.4344          | 0.8159   | 0.8169   |
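
The F1 Score and Accuracy columns are the kind of values a Trainer compute_metrics callback would report. The card does not say how F1 is averaged, so the macro averaging below is an assumption; this is a sketch, not the authors' evaluation code.

```python
# Hedged sketch of a compute_metrics callback that would yield the F1 and
# accuracy columns above; the "macro" averaging mode is assumed.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```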

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2