
GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_4096_512_46M on the mahdibaghbanzadeh/GUE_EMP_H3K14ac dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4754
  • F1 Score: 0.7754
  • Accuracy: 0.7746
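Because the checkpoint was trained with PEFT (see the framework versions below), it is presumably an adapter on top of the seqsight_4096_512_46M base model. The following is a minimal inference sketch, not a confirmed recipe: the adapter repo id, the binary classification head, and the assumption that the tokenizer accepts raw DNA strings are all guesses that should be checked against the actual repositories. If the base model ships custom code, trust_remote_code=True may be needed in both from_pretrained calls.

```python
# Minimal inference sketch (assumptions: the adapter lives at
# "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f",
# the task is binary sequence classification, and the tokenizer
# accepts raw DNA strings; adjust ids and flags to the actual repos).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter
model.eval()

sequence = "ACGTACGTACGT"  # placeholder DNA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities for the H3K14ac task
```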

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
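For reference, these settings roughly correspond to the Hugging Face TrainingArguments sketched below. This is a reconstruction from the list above, not the actual training script: the output_dir, evaluation strategy, and logging cadence are assumptions (the 200-step evaluation interval is inferred from the results table that follows).

```python
# Approximate reconstruction of the listed hyperparameters with the
# Transformers Trainer API; model, dataset, and metric setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,               # training_steps: 10000
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,                 # inferred from the 200-step cadence in the results table
    logging_steps=200,
)
```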

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5415        | 0.97  | 200   | 0.4991          | 0.7599   | 0.7582   |
| 0.4981        | 1.93  | 400   | 0.4800          | 0.7757   | 0.7743   |
| 0.4837        | 2.9   | 600   | 0.4962          | 0.7656   | 0.7643   |
| 0.4772        | 3.86  | 800   | 0.4695          | 0.7792   | 0.7779   |
| 0.471         | 4.83  | 1000  | 0.5198          | 0.7598   | 0.7589   |
| 0.4605        | 5.8   | 1200  | 0.4931          | 0.7705   | 0.7691   |
| 0.4537        | 6.76  | 1400  | 0.4735          | 0.7818   | 0.7803   |
| 0.4446        | 7.73  | 1600  | 0.4716          | 0.7838   | 0.7825   |
| 0.4392        | 8.7   | 1800  | 0.4845          | 0.7800   | 0.7785   |
| 0.4285        | 9.66  | 2000  | 0.4860          | 0.7704   | 0.7688   |
| 0.427         | 10.63 | 2200  | 0.5009          | 0.7794   | 0.7779   |
| 0.4138        | 11.59 | 2400  | 0.4853          | 0.7758   | 0.7746   |
| 0.409         | 12.56 | 2600  | 0.4986          | 0.7805   | 0.7794   |
| 0.3984        | 13.53 | 2800  | 0.5008          | 0.7647   | 0.7631   |
| 0.3934        | 14.49 | 3000  | 0.5097          | 0.7713   | 0.7697   |
| 0.377         | 15.46 | 3200  | 0.5298          | 0.7762   | 0.7749   |
| 0.3789        | 16.43 | 3400  | 0.5258          | 0.7698   | 0.7682   |
| 0.3651        | 17.39 | 3600  | 0.5315          | 0.7672   | 0.7658   |
| 0.356         | 18.36 | 3800  | 0.5486          | 0.7702   | 0.7688   |
| 0.3535        | 19.32 | 4000  | 0.5380          | 0.7740   | 0.7728   |
| 0.3368        | 20.29 | 4200  | 0.5776          | 0.7764   | 0.7758   |
| 0.3397        | 21.26 | 4400  | 0.5543          | 0.7727   | 0.7713   |
| 0.3299        | 22.22 | 4600  | 0.5806          | 0.7677   | 0.7661   |
| 0.3246        | 23.19 | 4800  | 0.5656          | 0.7772   | 0.7758   |
| 0.3155        | 24.15 | 5000  | 0.6116          | 0.7749   | 0.7734   |
| 0.3081        | 25.12 | 5200  | 0.5955          | 0.7653   | 0.7637   |
| 0.3004        | 26.09 | 5400  | 0.5893          | 0.7790   | 0.7776   |
| 0.3003        | 27.05 | 5600  | 0.6006          | 0.7740   | 0.7725   |
| 0.2921        | 28.02 | 5800  | 0.6405          | 0.7692   | 0.7676   |
| 0.2845        | 28.99 | 6000  | 0.6178          | 0.7682   | 0.7667   |
| 0.2802        | 29.95 | 6200  | 0.6065          | 0.7690   | 0.7676   |
| 0.2781        | 30.92 | 6400  | 0.5852          | 0.7805   | 0.7797   |
| 0.2693        | 31.88 | 6600  | 0.6314          | 0.7724   | 0.7710   |
| 0.2647        | 32.85 | 6800  | 0.6444          | 0.7695   | 0.7679   |
| 0.2607        | 33.82 | 7000  | 0.6346          | 0.7745   | 0.7731   |
| 0.2542        | 34.78 | 7200  | 0.6513          | 0.7682   | 0.7667   |
| 0.257         | 35.75 | 7400  | 0.6532          | 0.7611   | 0.7595   |
| 0.2466        | 36.71 | 7600  | 0.6450          | 0.7733   | 0.7725   |
| 0.2456        | 37.68 | 7800  | 0.6273          | 0.7704   | 0.7691   |
| 0.2411        | 38.65 | 8000  | 0.6753          | 0.7705   | 0.7691   |
| 0.2438        | 39.61 | 8200  | 0.6777          | 0.7700   | 0.7688   |
| 0.2326        | 40.58 | 8400  | 0.6991          | 0.7704   | 0.7688   |
| 0.2391        | 41.55 | 8600  | 0.6810          | 0.7670   | 0.7655   |
| 0.2335        | 42.51 | 8800  | 0.6759          | 0.7719   | 0.7707   |
| 0.231         | 43.48 | 9000  | 0.6950          | 0.7715   | 0.7700   |
| 0.2292        | 44.44 | 9200  | 0.6988          | 0.7682   | 0.7667   |
| 0.2291        | 45.41 | 9400  | 0.6996          | 0.7682   | 0.7667   |
| 0.2188        | 46.38 | 9600  | 0.7126          | 0.7703   | 0.7688   |
| 0.2218        | 47.34 | 9800  | 0.7034          | 0.7696   | 0.7682   |
| 0.2218        | 48.31 | 10000 | 0.7038          | 0.7705   | 0.7691   |
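The F1 Score and Accuracy columns suggest a compute_metrics callback along the lines of the sketch below. The card does not include the actual metric code, so the macro averaging and the use of scikit-learn are assumptions.

```python
# Hedged sketch of how the reported F1 score and accuracy could be computed
# during evaluation (macro-averaged F1 over the two classes is assumed).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```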

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2