
GUE_EMP_H3K14ac-seqsight_4096_512_27M-L32_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_4096_512_27M on the mahdibaghbanzadeh/GUE_EMP_H3K14ac dataset. It achieves the following results on the evaluation set (a loading sketch follows the results):

  • Loss: 0.4855
  • F1 Score: 0.7679
  • Accuracy: 0.7679
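This checkpoint is distributed as a PEFT adapter on top of the base seqsight model (see the framework versions below). The following is a minimal loading sketch, not a confirmed recipe: the sequence-classification head, `num_labels=2`, and the use of `trust_remote_code` are assumptions about the base checkpoint that this card does not document.

```python
# Minimal inference sketch (assumptions: the base checkpoint works with
# AutoModelForSequenceClassification, the task is binary, and remote code
# is required -- none of this is confirmed by the card).
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_27M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_27M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Score a DNA sequence for the H3K14ac mark (binary classification assumed).
inputs = tokenizer("ACGTACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```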

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
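The summary above names mahdibaghbanzadeh/GUE_EMP_H3K14ac as the fine-tuning dataset. A quick inspection sketch with the Datasets library follows; the available splits and column names are not documented here and may differ.

```python
# Sketch: inspect the fine-tuning dataset from the Hub (split and column
# names are assumptions; check the dataset card for the actual schema).
from datasets import load_dataset

dataset = load_dataset("mahdibaghbanzadeh/GUE_EMP_H3K14ac")
print(dataset)               # shows available splits and features
print(dataset["train"][0])   # assumes a "train" split exists
```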

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
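The listed values could be expressed with transformers.TrainingArguments roughly as below. This is a sketch, not the original training script: the output directory, logging cadence, and evaluation interval are assumptions (the 200-step interval matches the results table below).

```python
# Sketch of Trainer settings matching the listed hyperparameters.
# output_dir and eval cadence are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K14ac-seqsight_4096_512_27M-L32_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,     # evaluation every 200 steps matches the table below
    logging_steps=200,
)
```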

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5685 | 0.97 | 200 | 0.5071 | 0.7590 | 0.7573 |
| 0.5158 | 1.93 | 400 | 0.4967 | 0.7661 | 0.7646 |
| 0.5004 | 2.9 | 600 | 0.5205 | 0.7497 | 0.7489 |
| 0.4953 | 3.86 | 800 | 0.4826 | 0.7799 | 0.7794 |
| 0.4864 | 4.83 | 1000 | 0.5116 | 0.7616 | 0.7604 |
| 0.4783 | 5.8 | 1200 | 0.4979 | 0.7659 | 0.7643 |
| 0.4724 | 6.76 | 1400 | 0.4866 | 0.7782 | 0.7767 |
| 0.4679 | 7.73 | 1600 | 0.4871 | 0.7746 | 0.7731 |
| 0.4598 | 8.7 | 1800 | 0.4984 | 0.7704 | 0.7688 |
| 0.4564 | 9.66 | 2000 | 0.4871 | 0.7722 | 0.7707 |
| 0.4542 | 10.63 | 2200 | 0.5008 | 0.7704 | 0.7688 |
| 0.4405 | 11.59 | 2400 | 0.4907 | 0.7687 | 0.7673 |
| 0.4399 | 12.56 | 2600 | 0.5029 | 0.7700 | 0.7685 |
| 0.4313 | 13.53 | 2800 | 0.5014 | 0.7704 | 0.7688 |
| 0.4281 | 14.49 | 3000 | 0.4998 | 0.7670 | 0.7661 |
| 0.4179 | 15.46 | 3200 | 0.5087 | 0.7690 | 0.7688 |
| 0.4142 | 16.43 | 3400 | 0.4976 | 0.7741 | 0.7728 |
| 0.4054 | 17.39 | 3600 | 0.5134 | 0.7661 | 0.7649 |
| 0.3991 | 18.36 | 3800 | 0.5143 | 0.7586 | 0.7585 |
| 0.3961 | 19.32 | 4000 | 0.5153 | 0.7682 | 0.7670 |
| 0.3849 | 20.29 | 4200 | 0.5254 | 0.7655 | 0.7655 |
| 0.3882 | 21.26 | 4400 | 0.5235 | 0.7719 | 0.7703 |
| 0.3755 | 22.22 | 4600 | 0.5317 | 0.7686 | 0.7673 |
| 0.3739 | 23.19 | 4800 | 0.5277 | 0.7739 | 0.7728 |
| 0.3711 | 24.15 | 5000 | 0.5461 | 0.7687 | 0.7673 |
| 0.3615 | 25.12 | 5200 | 0.5502 | 0.7692 | 0.7676 |
| 0.3538 | 26.09 | 5400 | 0.5475 | 0.7669 | 0.7655 |
| 0.3495 | 27.05 | 5600 | 0.5556 | 0.7693 | 0.7679 |
| 0.3478 | 28.02 | 5800 | 0.5456 | 0.7684 | 0.7673 |
| 0.3456 | 28.99 | 6000 | 0.5483 | 0.7615 | 0.7607 |
| 0.336 | 29.95 | 6200 | 0.5668 | 0.7645 | 0.7631 |
| 0.3345 | 30.92 | 6400 | 0.5601 | 0.7614 | 0.7616 |
| 0.3379 | 31.88 | 6600 | 0.5618 | 0.7653 | 0.7643 |
| 0.3231 | 32.85 | 6800 | 0.5753 | 0.7600 | 0.7585 |
| 0.3218 | 33.82 | 7000 | 0.5812 | 0.7652 | 0.7637 |
| 0.3192 | 34.78 | 7200 | 0.5803 | 0.7633 | 0.7622 |
| 0.3162 | 35.75 | 7400 | 0.5773 | 0.7640 | 0.7628 |
| 0.3095 | 36.71 | 7600 | 0.5939 | 0.7628 | 0.7619 |
| 0.3109 | 37.68 | 7800 | 0.5872 | 0.7578 | 0.7564 |
| 0.3036 | 38.65 | 8000 | 0.5988 | 0.7640 | 0.7628 |
| 0.3067 | 39.61 | 8200 | 0.5909 | 0.7552 | 0.7549 |
| 0.3034 | 40.58 | 8400 | 0.5953 | 0.7601 | 0.7589 |
| 0.2906 | 41.55 | 8600 | 0.6200 | 0.7609 | 0.7595 |
| 0.3006 | 42.51 | 8800 | 0.5989 | 0.7618 | 0.7607 |
| 0.293 | 43.48 | 9000 | 0.6146 | 0.7623 | 0.7610 |
| 0.2939 | 44.44 | 9200 | 0.6083 | 0.7613 | 0.7601 |
| 0.2909 | 45.41 | 9400 | 0.6147 | 0.7593 | 0.7582 |
| 0.2915 | 46.38 | 9600 | 0.6134 | 0.7607 | 0.7595 |
| 0.2929 | 47.34 | 9800 | 0.6081 | 0.7574 | 0.7567 |
| 0.2868 | 48.31 | 10000 | 0.6106 | 0.7599 | 0.7592 |
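
The F1 Score and Accuracy columns could be produced by a compute_metrics callback along the following lines. The exact metric implementation and averaging (macro vs. binary F1) are not stated in this card, so treat this as a sketch.

```python
# Sketch of a compute_metrics callback producing the F1/accuracy columns
# above (macro averaging is an assumption; the original script may differ).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, predictions, average="macro"),
        "accuracy": accuracy_score(labels, predictions),
    }
```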

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2