GUE_EMP_H3K14ac-seqsight_32768_512_43M-L32_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_32768_512_43M on the mahdibaghbanzadeh/GUE_EMP_H3K14ac dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4924
  • F1 Score: 0.7762
  • Accuracy: 0.7752
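
As a point of reference only, the following is a minimal inference sketch that is not part of the original card. It assumes the adapter is published as mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_32768_512_43M-L32_f, that the base checkpoint ships a tokenizer and supports a two-label sequence-classification head, and that loading it requires trust_remote_code=True; adjust these details to the actual repository.

```python
# Hypothetical loading/inference sketch; the adapter repo id, num_labels=2 and
# trust_remote_code are assumptions, not confirmed by the model card.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_32768_512_43M-L32_f"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter
model.eval()

sequence = "ACGTACGTACGT"  # placeholder DNA sequence; use a real GUE_EMP_H3K14ac input
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities for the H3K14ac label
```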

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative configuration sketch follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
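
The sketch below maps these settings onto a Transformers 4.38.x TrainingArguments object for illustration only; it is not the training script. The output directory, the per-device batch-size split, and the evaluation and logging cadence are assumptions (the 200-step evaluation interval is inferred from the results table below).

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# Values not listed in the card are marked as assumptions.
training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K14ac-seqsight_32768_512_43M-L32_f",  # assumed output path
    learning_rate=5e-4,
    per_device_train_batch_size=128,  # assumes a single device, so total batch size 128
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10000,                 # training_steps: 10000
    evaluation_strategy="steps",     # assumption: evaluate every 200 steps (see table below)
    eval_steps=200,
    logging_steps=200,
)
```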

Training results

Training Loss  Epoch  Step   Validation Loss  F1 Score  Accuracy
0.5719         0.97   200    0.5131           0.7592    0.7576
0.516          1.93   400    0.4993           0.7691    0.7676
0.5012         2.9    600    0.5039           0.7604    0.7589
0.4962         3.86   800    0.4826           0.7744    0.7734
0.4878         4.83   1000   0.5088           0.7652    0.7637
0.4813         5.8    1200   0.4903           0.7764    0.7749
0.4734         6.76   1400   0.4825           0.7806    0.7791
0.4678         7.73   1600   0.4871           0.7731    0.7716
0.464          8.7    1800   0.4969           0.7730    0.7716
0.457          9.66   2000   0.4931           0.7761    0.7746
0.4555         10.63  2200   0.5066           0.7755    0.7740
0.4445         11.59  2400   0.4927           0.7700    0.7688
0.4455         12.56  2600   0.5078           0.7752    0.7737
0.4334         13.53  2800   0.5079           0.7677    0.7661
0.4316         14.49  3000   0.4904           0.7696    0.7682
0.4191         15.46  3200   0.4980           0.7759    0.7749
0.4206         16.43  3400   0.4976           0.7710    0.7694
0.4119         17.39  3600   0.5108           0.7670    0.7655
0.4073         18.36  3800   0.5048           0.7689    0.7691
0.3984         19.32  4000   0.5055           0.7800    0.7788
0.3956         20.29  4200   0.5051           0.7701    0.7691
0.3896         21.26  4400   0.5276           0.7695    0.7679
0.3835         22.22  4600   0.5343           0.7647    0.7631
0.3797         23.19  4800   0.5330           0.7693    0.7679
0.3742         24.15  5000   0.5308           0.7655    0.7643
0.3716         25.12  5200   0.5492           0.7650    0.7634
0.3631         26.09  5400   0.5351           0.7614    0.7598
0.3565         27.05  5600   0.5650           0.7677    0.7661
0.3511         28.02  5800   0.5519           0.7723    0.7710
0.3508         28.99  6000   0.5461           0.7672    0.7658
0.3449         29.95  6200   0.5521           0.7676    0.7664
0.3422         30.92  6400   0.5529           0.7701    0.7703
0.3384         31.88  6600   0.5605           0.7624    0.7610
0.3347         32.85  6800   0.5864           0.7611    0.7595
0.3308         33.82  7000   0.5862           0.7644    0.7628
0.3215         34.78  7200   0.6019           0.7590    0.7573
0.3212         35.75  7400   0.5779           0.7651    0.7637
0.3204         36.71  7600   0.5864           0.7660    0.7646
0.3105         37.68  7800   0.6002           0.7599    0.7582
0.3132         38.65  8000   0.5929           0.7654    0.7640
0.317          39.61  8200   0.5880           0.7680    0.7670
0.3075         40.58  8400   0.6154           0.7629    0.7613
0.3072         41.55  8600   0.6056           0.7673    0.7658
0.3029         42.51  8800   0.6055           0.7624    0.7610
0.3003         43.48  9000   0.6175           0.7647    0.7631
0.3014         44.44  9200   0.6056           0.7622    0.7607
0.299          45.41  9400   0.6095           0.7637    0.7622
0.2925         46.38  9600   0.6190           0.7637    0.7622
0.3016         47.34  9800   0.6069           0.7605    0.7592
0.297          48.31  10000  0.6072           0.7626    0.7613

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2
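
As an illustration only, the short check below prints the installed versions of these libraries for comparison with the list above; exact patch versions are likely not required just to load the adapter.

```python
# Print installed versions of the libraries listed above for comparison.
import datasets
import peft
import tokenizers
import torch
import transformers

for name, module in [
    ("PEFT", peft),
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```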