
GUE_EMP_H3K79me3-seqsight_16384_512_34M-L32_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_16384_512_34M on the mahdibaghbanzadeh/GUE_EMP_H3K79me3 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4507
  • F1 Score: 0.8194
  • Accuracy: 0.8197
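The framework versions listed below indicate the checkpoint was trained with PEFT, so it is most likely distributed as an adapter on top of the base seqsight model. The snippet below is a minimal, untested sketch of how such an adapter could be loaded for sequence classification; the adapter repo id, the use of `AutoModelForSequenceClassification`, the label count, and the `trust_remote_code` flag are assumptions, not confirmed details of this model.

```python
# Hedged sketch: loading the fine-tuned adapter on top of the base seqsight model.
# Assumptions (not confirmed by this card): the checkpoint is a PEFT adapter, the base
# model exposes a sequence-classification head via AutoModelForSequenceClassification,
# and the adapter lives at the repo id shown in this card's title.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_34M-L32_f"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary H3K79me3 task assumed
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Classify one DNA sequence (H3K79me3 presence vs. absence).
sequence = "ACGTACGTACGTACGTACGTACGTACGTACGT"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```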

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
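The fine-tuning dataset referenced above is hosted on the Hugging Face Hub. The following sketch assumes it can be loaded directly with the `datasets` library and that it exposes standard splits with sequence and label columns; split and column names are assumptions.

```python
# Hedged sketch: inspecting the fine-tuning dataset.
# Assumption: mahdibaghbanzadeh/GUE_EMP_H3K79me3 loads via datasets.load_dataset and
# contains DNA sequences with binary H3K79me3 labels; split/column names may differ.
from datasets import load_dataset

dataset = load_dataset("mahdibaghbanzadeh/GUE_EMP_H3K79me3")
print(dataset)              # show available splits and columns
print(dataset["train"][0])  # inspect one example (assumed "train" split)
```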

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
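For reference, the hyperparameters above map onto a `transformers` `TrainingArguments` configuration roughly as follows. This is a sketch of an equivalent setup, not the original training script; the output directory and the evaluation/logging cadence are assumptions.

```python
# Hedged sketch: a TrainingArguments configuration matching the listed hyperparameters.
# The output directory and the 200-step evaluation cadence (inferred from the results
# table below) are assumptions; only the listed values are taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_16384_512_34M-L32_f",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10000,
    evaluation_strategy="steps",  # assumed: metrics reported every 200 steps
    eval_steps=200,
    logging_steps=200,
)
```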

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5034        | 1.1   | 200   | 0.4527          | 0.8016   | 0.8017   |
| 0.4532        | 2.21  | 400   | 0.4417          | 0.8117   | 0.8124   |
| 0.4401        | 3.31  | 600   | 0.4337          | 0.8079   | 0.8086   |
| 0.4227        | 4.42  | 800   | 0.4331          | 0.8155   | 0.8162   |
| 0.4155        | 5.52  | 1000  | 0.4300          | 0.8146   | 0.8159   |
| 0.4046        | 6.63  | 1200  | 0.4488          | 0.8063   | 0.8083   |
| 0.402         | 7.73  | 1400  | 0.4522          | 0.8022   | 0.8051   |
| 0.3905        | 8.84  | 1600  | 0.4595          | 0.8018   | 0.8044   |
| 0.3818        | 9.94  | 1800  | 0.4344          | 0.8173   | 0.8173   |
| 0.3747        | 11.05 | 2000  | 0.4403          | 0.8127   | 0.8131   |
| 0.3694        | 12.15 | 2200  | 0.4358          | 0.8202   | 0.8211   |
| 0.3559        | 13.26 | 2400  | 0.4452          | 0.8150   | 0.8155   |
| 0.3534        | 14.36 | 2600  | 0.4384          | 0.8150   | 0.8155   |
| 0.3474        | 15.47 | 2800  | 0.4431          | 0.8185   | 0.8190   |
| 0.3327        | 16.57 | 3000  | 0.4609          | 0.8108   | 0.8107   |
| 0.3349        | 17.68 | 3200  | 0.4437          | 0.8203   | 0.8204   |
| 0.3188        | 18.78 | 3400  | 0.4701          | 0.8065   | 0.8079   |
| 0.3131        | 19.89 | 3600  | 0.4559          | 0.8171   | 0.8176   |
| 0.3114        | 20.99 | 3800  | 0.4827          | 0.8121   | 0.8124   |
| 0.3021        | 22.1  | 4000  | 0.4816          | 0.8199   | 0.8197   |
| 0.2955        | 23.2  | 4200  | 0.4813          | 0.8139   | 0.8141   |
| 0.2872        | 24.31 | 4400  | 0.4862          | 0.8123   | 0.8128   |
| 0.2768        | 25.41 | 4600  | 0.4948          | 0.8153   | 0.8152   |
| 0.2785        | 26.52 | 4800  | 0.5160          | 0.8091   | 0.8096   |
| 0.2734        | 27.62 | 5000  | 0.5076          | 0.8075   | 0.8086   |
| 0.2618        | 28.73 | 5200  | 0.5060          | 0.8116   | 0.8121   |
| 0.2563        | 29.83 | 5400  | 0.5171          | 0.8074   | 0.8076   |
| 0.2494        | 30.94 | 5600  | 0.5232          | 0.8151   | 0.8155   |
| 0.2449        | 32.04 | 5800  | 0.5446          | 0.8069   | 0.8069   |
| 0.2451        | 33.15 | 6000  | 0.5403          | 0.8110   | 0.8114   |
| 0.2342        | 34.25 | 6200  | 0.5469          | 0.8121   | 0.8121   |
| 0.2335        | 35.36 | 6400  | 0.5858          | 0.8135   | 0.8141   |
| 0.233         | 36.46 | 6600  | 0.5532          | 0.8067   | 0.8076   |
| 0.2238        | 37.57 | 6800  | 0.5736          | 0.8126   | 0.8128   |
| 0.2204        | 38.67 | 7000  | 0.5773          | 0.8036   | 0.8044   |
| 0.2164        | 39.78 | 7200  | 0.5784          | 0.8148   | 0.8152   |
| 0.2121        | 40.88 | 7400  | 0.5757          | 0.8088   | 0.8089   |
| 0.2092        | 41.99 | 7600  | 0.5637          | 0.8097   | 0.8096   |
| 0.2088        | 43.09 | 7800  | 0.5988          | 0.8014   | 0.8020   |
| 0.2005        | 44.2  | 8000  | 0.6101          | 0.8042   | 0.8048   |
| 0.1994        | 45.3  | 8200  | 0.6062          | 0.8106   | 0.8107   |
| 0.1976        | 46.41 | 8400  | 0.6074          | 0.8042   | 0.8044   |
| 0.1959        | 47.51 | 8600  | 0.6235          | 0.8058   | 0.8069   |
| 0.1972        | 48.62 | 8800  | 0.6036          | 0.8073   | 0.8076   |
| 0.188         | 49.72 | 9000  | 0.6267          | 0.8074   | 0.8079   |
| 0.1939        | 50.83 | 9200  | 0.6132          | 0.8069   | 0.8076   |
| 0.1887        | 51.93 | 9400  | 0.6256          | 0.8103   | 0.8107   |
| 0.186         | 53.04 | 9600  | 0.6270          | 0.8066   | 0.8069   |
| 0.1811        | 54.14 | 9800  | 0.6349          | 0.8071   | 0.8076   |
| 0.185         | 55.25 | 10000 | 0.6333          | 0.8061   | 0.8065   |

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2