
GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_16384_512_56M on the mahdibaghbanzadeh/GUE_EMP_H3K79me3 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4267
  • F1 Score: 0.8228
  • Accuracy: 0.8232
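
The card's Framework versions section lists PEFT, which suggests this checkpoint is a LoRA-style adapter on top of the base model. Below is a minimal, hedged sketch of how such an adapter could be loaded for inference; the adapter repo id, the `trust_remote_code` flag, and the binary-classification head (`num_labels=2`) are assumptions, not details confirmed by this card.

```python
# Minimal sketch (not from the model card): load the base model and apply
# this fine-tuned PEFT adapter for sequence classification.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"  # base model named in the card
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id,
    num_labels=2,               # assumption: binary H3K79me3 presence/absence task
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Score an example DNA sequence.
inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```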

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
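
For readers who want to reproduce a comparable run, the sketch below shows one way the listed hyperparameters could map onto `transformers.TrainingArguments` (Transformers 4.38). The output directory name, the evaluation/logging interval of 200 steps (inferred from the results table), and the choice of `optim` are assumptions; this is not the original training script.

```python
# Minimal sketch, assuming a standard transformers Trainer setup.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f",  # assumed output name
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",  # assumption: results table reports eval every 200 steps
    eval_steps=200,
    logging_steps=200,
)
```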

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:---|:---|:---|:---|:---|:---|
| 0.4764 | 1.1 | 200 | 0.4380 | 0.8112 | 0.8114 |
| 0.4381 | 2.21 | 400 | 0.4305 | 0.8106 | 0.8117 |
| 0.4268 | 3.31 | 600 | 0.4299 | 0.8055 | 0.8065 |
| 0.4129 | 4.42 | 800 | 0.4419 | 0.8070 | 0.8093 |
| 0.4092 | 5.52 | 1000 | 0.4268 | 0.8149 | 0.8166 |
| 0.3941 | 6.63 | 1200 | 0.4522 | 0.8068 | 0.8096 |
| 0.3919 | 7.73 | 1400 | 0.4270 | 0.8182 | 0.8197 |
| 0.3788 | 8.84 | 1600 | 0.4612 | 0.8045 | 0.8079 |
| 0.3739 | 9.94 | 1800 | 0.4191 | 0.8281 | 0.8287 |
| 0.3658 | 11.05 | 2000 | 0.4359 | 0.8158 | 0.8159 |
| 0.3602 | 12.15 | 2200 | 0.4162 | 0.8307 | 0.8311 |
| 0.3471 | 13.26 | 2400 | 0.4247 | 0.8229 | 0.8239 |
| 0.3454 | 14.36 | 2600 | 0.4207 | 0.8289 | 0.8291 |
| 0.3342 | 15.47 | 2800 | 0.4371 | 0.8172 | 0.8180 |
| 0.3245 | 16.57 | 3000 | 0.4329 | 0.8222 | 0.8221 |
| 0.3179 | 17.68 | 3200 | 0.4430 | 0.8146 | 0.8152 |
| 0.3075 | 18.78 | 3400 | 0.4965 | 0.7971 | 0.8003 |
| 0.3012 | 19.89 | 3600 | 0.4450 | 0.8216 | 0.8225 |
| 0.2906 | 20.99 | 3800 | 0.4661 | 0.8151 | 0.8162 |
| 0.2801 | 22.1 | 4000 | 0.4618 | 0.8218 | 0.8218 |
| 0.2748 | 23.2 | 4200 | 0.4734 | 0.8115 | 0.8124 |
| 0.2642 | 24.31 | 4400 | 0.5041 | 0.8032 | 0.8044 |
| 0.2551 | 25.41 | 4600 | 0.5074 | 0.8081 | 0.8089 |
| 0.2536 | 26.52 | 4800 | 0.5061 | 0.7931 | 0.7947 |
| 0.2485 | 27.62 | 5000 | 0.5218 | 0.8000 | 0.8020 |
| 0.2397 | 28.73 | 5200 | 0.4901 | 0.8071 | 0.8083 |
| 0.2293 | 29.83 | 5400 | 0.5268 | 0.7981 | 0.7992 |
| 0.2272 | 30.94 | 5600 | 0.5205 | 0.8129 | 0.8131 |
| 0.218 | 32.04 | 5800 | 0.5089 | 0.8119 | 0.8121 |
| 0.2167 | 33.15 | 6000 | 0.5431 | 0.8035 | 0.8044 |
| 0.2099 | 34.25 | 6200 | 0.5419 | 0.8113 | 0.8114 |
| 0.2042 | 35.36 | 6400 | 0.5599 | 0.8094 | 0.8100 |
| 0.2014 | 36.46 | 6600 | 0.5510 | 0.8078 | 0.8086 |
| 0.1992 | 37.57 | 6800 | 0.5469 | 0.8102 | 0.8107 |
| 0.1888 | 38.67 | 7000 | 0.5835 | 0.8086 | 0.8096 |
| 0.188 | 39.78 | 7200 | 0.5681 | 0.8132 | 0.8141 |
| 0.1853 | 40.88 | 7400 | 0.5798 | 0.8029 | 0.8037 |
| 0.1798 | 41.99 | 7600 | 0.5693 | 0.8074 | 0.8086 |
| 0.1779 | 43.09 | 7800 | 0.5952 | 0.8127 | 0.8135 |
| 0.1745 | 44.2 | 8000 | 0.5988 | 0.8070 | 0.8076 |
| 0.171 | 45.3 | 8200 | 0.5874 | 0.8056 | 0.8062 |
| 0.1648 | 46.41 | 8400 | 0.6126 | 0.8043 | 0.8055 |
| 0.1695 | 47.51 | 8600 | 0.6173 | 0.8072 | 0.8083 |
| 0.1622 | 48.62 | 8800 | 0.6059 | 0.8049 | 0.8055 |
| 0.1594 | 49.72 | 9000 | 0.6308 | 0.8064 | 0.8076 |
| 0.1633 | 50.83 | 9200 | 0.6171 | 0.8004 | 0.8017 |
| 0.1542 | 51.93 | 9400 | 0.6232 | 0.8114 | 0.8121 |
| 0.1529 | 53.04 | 9600 | 0.6267 | 0.8081 | 0.8089 |
| 0.1544 | 54.14 | 9800 | 0.6244 | 0.8083 | 0.8089 |
| 0.1524 | 55.25 | 10000 | 0.6277 | 0.8082 | 0.8089 |

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2