GUE_EMP_H3K4me2-seqsight_32768_512_43M-L32_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_32768_512_43M on the mahdibaghbanzadeh/GUE_EMP_H3K4me2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5885
  • F1 Score: 0.6910
  • Accuracy: 0.6960
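
The sketch below shows one way this checkpoint could be loaded for inference. It is a minimal, unverified example: it assumes the checkpoint is a PEFT adapter on top of the base model (consistent with the PEFT version listed under Framework versions), that the task is binary sequence classification, and that the repository id is mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_32768_512_43M-L32_f.

```python
# Hedged sketch: load this adapter for sequence classification.
# Assumptions (not confirmed by this card): the checkpoint is a PEFT adapter,
# the task is binary classification, and the tokenizer ships with the base model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_32768_512_43M-L32_f"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Illustrative DNA input; H3K4me2 is a binary histone-mark prediction task in GUE.
seq = "ACGT" * 32
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```

If the base model does not rely on custom modeling code, trust_remote_code=True can be dropped.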

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
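
The card does not describe the data further, but the dataset id given above can be inspected directly from the Hub. A minimal sketch, where the split and column names are assumptions rather than information from this card:

```python
# Hedged sketch: inspect the H3K4me2 dataset used for fine-tuning.
# Split and column names are assumptions; check the dataset card for specifics.
from datasets import load_dataset

ds = load_dataset("mahdibaghbanzadeh/GUE_EMP_H3K4me2")
print(ds)              # shows the available splits and columns
print(ds["train"][0])  # assumes a "train" split exists
```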

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
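
For reference, below is a hedged sketch of a Trainer configuration that mirrors these settings. The actual training script, model setup, and adapter configuration are not part of this card, so the names here are illustrative only.

```python
# Hedged sketch mirroring the listed hyperparameters; model and datasets omitted.
from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="GUE_EMP_H3K4me2-seqsight_32768_512_43M-L32_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,   # matches the 200-step evaluation interval in the results table
    logging_steps=200,
)

# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```

The Adam betas and epsilon listed above match the TrainingArguments defaults, so they are not set explicitly in this sketch.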

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6489        | 1.04  | 200   | 0.6205          | 0.6282   | 0.6572   |
| 0.6141        | 2.08  | 400   | 0.6325          | 0.6494   | 0.6468   |
| 0.6004        | 3.12  | 600   | 0.6101          | 0.6761   | 0.6777   |
| 0.5966        | 4.17  | 800   | 0.6098          | 0.6706   | 0.6696   |
| 0.5871        | 5.21  | 1000  | 0.6038          | 0.6727   | 0.6787   |
| 0.5799        | 6.25  | 1200  | 0.6059          | 0.6757   | 0.6748   |
| 0.5724        | 7.29  | 1400  | 0.6034          | 0.6771   | 0.6764   |
| 0.5654        | 8.33  | 1600  | 0.6109          | 0.6796   | 0.6784   |
| 0.5613        | 9.38  | 1800  | 0.6213          | 0.6759   | 0.6735   |
| 0.554         | 10.42 | 2000  | 0.5952          | 0.6836   | 0.6885   |
| 0.551         | 11.46 | 2200  | 0.6100          | 0.6832   | 0.6852   |
| 0.5368        | 12.5  | 2400  | 0.6070          | 0.6786   | 0.6804   |
| 0.532         | 13.54 | 2600  | 0.6329          | 0.6777   | 0.6758   |
| 0.5253        | 14.58 | 2800  | 0.6159          | 0.6759   | 0.6804   |
| 0.5216        | 15.62 | 3000  | 0.6318          | 0.6718   | 0.6703   |
| 0.5124        | 16.67 | 3200  | 0.6345          | 0.6771   | 0.6768   |
| 0.5005        | 17.71 | 3400  | 0.6745          | 0.6740   | 0.6716   |
| 0.4965        | 18.75 | 3600  | 0.6430          | 0.6810   | 0.6804   |
| 0.4911        | 19.79 | 3800  | 0.6654          | 0.6789   | 0.6771   |
| 0.4822        | 20.83 | 4000  | 0.6607          | 0.6792   | 0.6771   |
| 0.4738        | 21.88 | 4200  | 0.6825          | 0.6787   | 0.6768   |
| 0.466         | 22.92 | 4400  | 0.6785          | 0.6746   | 0.6725   |
| 0.4655        | 23.96 | 4600  | 0.6764          | 0.6757   | 0.6745   |
| 0.455         | 25.0  | 4800  | 0.7236          | 0.6651   | 0.6628   |
| 0.4458        | 26.04 | 5000  | 0.7467          | 0.6646   | 0.6621   |
| 0.4433        | 27.08 | 5200  | 0.7294          | 0.6622   | 0.6598   |
| 0.434         | 28.12 | 5400  | 0.6890          | 0.6697   | 0.6693   |
| 0.4279        | 29.17 | 5600  | 0.7299          | 0.6700   | 0.6680   |
| 0.4234        | 30.21 | 5800  | 0.7531          | 0.6694   | 0.6673   |
| 0.4146        | 31.25 | 6000  | 0.7745          | 0.6719   | 0.6696   |
| 0.4129        | 32.29 | 6200  | 0.7660          | 0.6646   | 0.6621   |
| 0.4072        | 33.33 | 6400  | 0.7582          | 0.6675   | 0.6657   |
| 0.3998        | 34.38 | 6600  | 0.7820          | 0.6706   | 0.6693   |
| 0.3952        | 35.42 | 6800  | 0.8030          | 0.6623   | 0.6598   |
| 0.39          | 36.46 | 7000  | 0.7745          | 0.6719   | 0.6696   |
| 0.387         | 37.5  | 7200  | 0.7637          | 0.6650   | 0.6628   |
| 0.3819        | 38.54 | 7400  | 0.7709          | 0.6764   | 0.6764   |
| 0.3772        | 39.58 | 7600  | 0.7686          | 0.6702   | 0.6706   |
| 0.3793        | 40.62 | 7800  | 0.8079          | 0.6683   | 0.6660   |
| 0.3733        | 41.67 | 8000  | 0.8120          | 0.6646   | 0.6621   |
| 0.3666        | 42.71 | 8200  | 0.8165          | 0.6693   | 0.6670   |
| 0.3671        | 43.75 | 8400  | 0.8185          | 0.6651   | 0.6628   |
| 0.3668        | 44.79 | 8600  | 0.8077          | 0.6697   | 0.6676   |
| 0.362         | 45.83 | 8800  | 0.8043          | 0.6658   | 0.6641   |
| 0.3612        | 46.88 | 9000  | 0.8099          | 0.6661   | 0.6637   |
| 0.3555        | 47.92 | 9200  | 0.8180          | 0.6710   | 0.6689   |
| 0.3501        | 48.96 | 9400  | 0.8214          | 0.6695   | 0.6680   |
| 0.3515        | 50.0  | 9600  | 0.8309          | 0.6679   | 0.6657   |
| 0.3512        | 51.04 | 9800  | 0.8336          | 0.6694   | 0.6673   |
| 0.3464        | 52.08 | 10000 | 0.8380          | 0.6692   | 0.6670   |
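
In this table the validation loss reaches its minimum around step 2000 (0.5952) and then rises steadily while the training loss keeps falling, a pattern consistent with overfitting. A small sketch for visualizing the curves, with only the first few rows of the table re-typed as example data:

```python
# Hedged sketch: plot training vs. validation loss from the table above.
# Only the first five evaluation points are re-typed here for illustration.
import matplotlib.pyplot as plt

steps = [200, 400, 600, 800, 1000]
train_loss = [0.6489, 0.6141, 0.6004, 0.5966, 0.5871]
val_loss = [0.6205, 0.6325, 0.6101, 0.6098, 0.6038]

plt.plot(steps, train_loss, label="training loss")
plt.plot(steps, val_loss, label="validation loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()
```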

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2