
GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_4096_512_46M on the mahdibaghbanzadeh/GUE_EMP_H3K9ac dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4618
  • F1 Score: 0.8001
  • Accuracy: 0.7996
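
Because this checkpoint is a PEFT adapter trained on top of mahdibaghbanzadeh/seqsight_4096_512_46M, it is loaded by attaching the adapter to the base model. The snippet below is a minimal sketch only: the adapter repo id, the use of `trust_remote_code`, the binary label count, and the raw-DNA-string tokenization are all assumptions and should be adjusted to the actual setup.

```python
# Minimal sketch: load the seqsight base model, attach this PEFT adapter,
# and score one DNA sequence. Repo id, label count, and preprocessing are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"  # base model named in this card
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

sequence = "ACGT" * 32  # placeholder DNA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (assuming a binary H3K9ac label)
```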

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
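
For reference, the hyperparameters listed above map roughly onto the following Hugging Face TrainingArguments. This is a reproducibility sketch, not the original training script: it assumes a single device (so train_batch_size becomes per_device_train_batch_size), and the output directory and evaluation/logging cadence are assumptions (the cadence of 200 steps is inferred from the results table below).

```python
# Rough TrainingArguments equivalent of the hyperparameters above.
# output_dir and evaluation/logging cadence are assumptions, not from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",  # metrics in the table below are reported every 200 steps
    eval_steps=200,
    logging_steps=200,
)
```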

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5729        | 1.15  | 200   | 0.5451          | 0.7280   | 0.7290   |
| 0.5306        | 2.3   | 400   | 0.5608          | 0.7185   | 0.7200   |
| 0.512         | 3.45  | 600   | 0.5266          | 0.7362   | 0.7359   |
| 0.5029        | 4.6   | 800   | 0.5154          | 0.7436   | 0.7431   |
| 0.4963        | 5.75  | 1000  | 0.5082          | 0.7440   | 0.7445   |
| 0.4907        | 6.9   | 1200  | 0.5116          | 0.7515   | 0.7510   |
| 0.4837        | 8.05  | 1400  | 0.5103          | 0.7524   | 0.7521   |
| 0.48          | 9.2   | 1600  | 0.5221          | 0.7459   | 0.7463   |
| 0.4729        | 10.34 | 1800  | 0.5101          | 0.7541   | 0.7539   |
| 0.4742        | 11.49 | 2000  | 0.5007          | 0.7596   | 0.7596   |
| 0.4669        | 12.64 | 2200  | 0.5137          | 0.7549   | 0.7546   |
| 0.4675        | 13.79 | 2400  | 0.4950          | 0.7656   | 0.7654   |
| 0.4648        | 14.94 | 2600  | 0.4951          | 0.7651   | 0.7647   |
| 0.4611        | 16.09 | 2800  | 0.5000          | 0.7629   | 0.7625   |
| 0.4573        | 17.24 | 3000  | 0.5075          | 0.7616   | 0.7611   |
| 0.4572        | 18.39 | 3200  | 0.5053          | 0.7625   | 0.7621   |
| 0.4581        | 19.54 | 3400  | 0.4920          | 0.7652   | 0.7647   |
| 0.4508        | 20.69 | 3600  | 0.4946          | 0.7632   | 0.7632   |
| 0.4475        | 21.84 | 3800  | 0.4949          | 0.7641   | 0.7639   |
| 0.4479        | 22.99 | 4000  | 0.4966          | 0.7630   | 0.7629   |
| 0.4468        | 24.14 | 4200  | 0.4915          | 0.7658   | 0.7657   |
| 0.4375        | 25.29 | 4400  | 0.5056          | 0.7644   | 0.7639   |
| 0.4442        | 26.44 | 4600  | 0.4948          | 0.7619   | 0.7614   |
| 0.4416        | 27.59 | 4800  | 0.5015          | 0.7672   | 0.7668   |
| 0.4381        | 28.74 | 5000  | 0.4962          | 0.7631   | 0.7629   |
| 0.4409        | 29.89 | 5200  | 0.4953          | 0.7659   | 0.7654   |
| 0.4345        | 31.03 | 5400  | 0.4977          | 0.7658   | 0.7654   |
| 0.4345        | 32.18 | 5600  | 0.4902          | 0.7655   | 0.7654   |
| 0.4294        | 33.33 | 5800  | 0.5008          | 0.7656   | 0.7654   |
| 0.4378        | 34.48 | 6000  | 0.4893          | 0.7666   | 0.7661   |
| 0.4267        | 35.63 | 6200  | 0.4947          | 0.7699   | 0.7697   |
| 0.434         | 36.78 | 6400  | 0.4922          | 0.7652   | 0.7647   |
| 0.4283        | 37.93 | 6600  | 0.5046          | 0.7654   | 0.7650   |
| 0.4271        | 39.08 | 6800  | 0.4893          | 0.7691   | 0.7686   |
| 0.4252        | 40.23 | 7000  | 0.4951          | 0.7623   | 0.7618   |
| 0.4233        | 41.38 | 7200  | 0.4940          | 0.7655   | 0.7650   |
| 0.425         | 42.53 | 7400  | 0.4938          | 0.7687   | 0.7683   |
| 0.426         | 43.68 | 7600  | 0.4903          | 0.7708   | 0.7704   |
| 0.4194        | 44.83 | 7800  | 0.4950          | 0.7648   | 0.7643   |
| 0.424         | 45.98 | 8000  | 0.4897          | 0.7694   | 0.7690   |
| 0.4236        | 47.13 | 8200  | 0.4926          | 0.7670   | 0.7665   |
| 0.4186        | 48.28 | 8400  | 0.4926          | 0.7669   | 0.7665   |
| 0.4177        | 49.43 | 8600  | 0.4937          | 0.7662   | 0.7657   |
| 0.4183        | 50.57 | 8800  | 0.4941          | 0.7669   | 0.7665   |
| 0.4197        | 51.72 | 9000  | 0.4950          | 0.7659   | 0.7654   |
| 0.4179        | 52.87 | 9200  | 0.4951          | 0.7655   | 0.7650   |
| 0.4188        | 54.02 | 9400  | 0.4934          | 0.7673   | 0.7668   |
| 0.4183        | 55.17 | 9600  | 0.4939          | 0.7673   | 0.7668   |
| 0.4171        | 56.32 | 9800  | 0.4922          | 0.7687   | 0.7683   |
| 0.4187        | 57.47 | 10000 | 0.4928          | 0.7684   | 0.7679   |

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2
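
To reproduce the environment, the versions above can be pinned directly. The block below is one possible requirements pinning based on this list; note that the listed PyTorch build (2.2.0+cu121) assumes a CUDA 12.1 wheel from the PyTorch index rather than the plain PyPI package.

```
peft==0.9.0
transformers==4.38.2
torch==2.2.0
datasets==2.17.1
tokenizers==0.15.2
```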