
GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_all

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_65536_512_47M on the mahdibaghbanzadeh/GUE_EMP_H3K79me3 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6720
  • F1 Score: 0.6799
  • Accuracy: 0.6803
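
Since PEFT is listed under the framework versions, the uploaded weights are most likely a parameter-efficient adapter on top of the base model rather than a full checkpoint. The snippet below is a minimal, unverified sketch of how such an adapter could be loaded for inference; the `num_labels=2` classification head, the `trust_remote_code=True` flag, and the tokenization of a raw DNA string are assumptions, not details taken from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_all"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
# Binary H3K79me3 classification is assumed here; the card does not document the head.
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

sequence = "ACGT" * 32  # placeholder DNA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```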

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 2048
  • eval_batch_size: 2048
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
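
The original training script is not included in the card, so the following is only a sketch of how the hyperparameters listed above map onto `TrainingArguments`. Dataset loading, preprocessing, and the PEFT/LoRA configuration are omitted because they are not documented here; the output directory and evaluation cadence are assumptions.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; everything else is assumed.
training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",  # assumed: the results table logs eval metrics every 200 steps
    eval_steps=200,
    logging_steps=200,
)
```

Since the epoch column in the results table reaches roughly 833 at step 10,000, one epoch corresponds to about 12 optimizer steps at this batch size.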

Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.664         | 16.67  | 200   | 0.6401          | 0.6299   | 0.6304   |
| 0.5946        | 33.33  | 400   | 0.6389          | 0.6380   | 0.6488   |
| 0.5639        | 50.0   | 600   | 0.6426          | 0.6450   | 0.6446   |
| 0.5371        | 66.67  | 800   | 0.6412          | 0.6514   | 0.6560   |
| 0.5172        | 83.33  | 1000  | 0.6499          | 0.6546   | 0.6581   |
| 0.5038        | 100.0  | 1200  | 0.6540          | 0.6534   | 0.6585   |
| 0.4921        | 116.67 | 1400  | 0.6549          | 0.6640   | 0.6650   |
| 0.4839        | 133.33 | 1600  | 0.6570          | 0.6659   | 0.6682   |
| 0.4754        | 150.0  | 1800  | 0.6598          | 0.6644   | 0.6654   |
| 0.4686        | 166.67 | 2000  | 0.6678          | 0.6695   | 0.6709   |
| 0.4616        | 183.33 | 2200  | 0.6607          | 0.6705   | 0.6709   |
| 0.4551        | 200.0  | 2400  | 0.6711          | 0.6593   | 0.6637   |
| 0.4511        | 216.67 | 2600  | 0.6789          | 0.6687   | 0.6685   |
| 0.4417        | 233.33 | 2800  | 0.6767          | 0.6714   | 0.6716   |
| 0.4368        | 250.0  | 3000  | 0.6887          | 0.6732   | 0.6737   |
| 0.4316        | 266.67 | 3200  | 0.6859          | 0.6682   | 0.6709   |
| 0.4266        | 283.33 | 3400  | 0.7035          | 0.6705   | 0.6706   |
| 0.4209        | 300.0  | 3600  | 0.7060          | 0.6617   | 0.6647   |
| 0.415         | 316.67 | 3800  | 0.7069          | 0.6694   | 0.6692   |
| 0.4083        | 333.33 | 4000  | 0.7094          | 0.6644   | 0.6644   |
| 0.4022        | 350.0  | 4200  | 0.7398          | 0.6621   | 0.6640   |
| 0.3967        | 366.67 | 4400  | 0.7386          | 0.6601   | 0.6623   |
| 0.3896        | 383.33 | 4600  | 0.7477          | 0.6668   | 0.6668   |
| 0.3849        | 400.0  | 4800  | 0.7197          | 0.6528   | 0.6543   |
| 0.3791        | 416.67 | 5000  | 0.7397          | 0.6602   | 0.6619   |
| 0.3744        | 433.33 | 5200  | 0.7433          | 0.6605   | 0.6616   |
| 0.3684        | 450.0  | 5400  | 0.7545          | 0.6619   | 0.6637   |
| 0.3626        | 466.67 | 5600  | 0.7832          | 0.6650   | 0.6678   |
| 0.3596        | 483.33 | 5800  | 0.7617          | 0.6638   | 0.6664   |
| 0.3536        | 500.0  | 6000  | 0.7507          | 0.6609   | 0.6619   |
| 0.3519        | 516.67 | 6200  | 0.7676          | 0.6641   | 0.6650   |
| 0.3473        | 533.33 | 6400  | 0.7612          | 0.6642   | 0.6657   |
| 0.3437        | 550.0  | 6600  | 0.7850          | 0.6601   | 0.6616   |
| 0.3402        | 566.67 | 6800  | 0.7865          | 0.6602   | 0.6612   |
| 0.3379        | 583.33 | 7000  | 0.8045          | 0.6598   | 0.6609   |
| 0.3344        | 600.0  | 7200  | 0.7939          | 0.6596   | 0.6612   |
| 0.3309        | 616.67 | 7400  | 0.7899          | 0.6598   | 0.6616   |
| 0.3293        | 633.33 | 7600  | 0.7791          | 0.6599   | 0.6602   |
| 0.3248        | 650.0  | 7800  | 0.7812          | 0.6588   | 0.6598   |
| 0.3227        | 666.67 | 8000  | 0.8036          | 0.6586   | 0.6605   |
| 0.3219        | 683.33 | 8200  | 0.8220          | 0.6582   | 0.6598   |
| 0.3208        | 700.0  | 8400  | 0.8077          | 0.6596   | 0.6605   |
| 0.3183        | 716.67 | 8600  | 0.8185          | 0.6566   | 0.6585   |
| 0.3172        | 733.33 | 8800  | 0.8053          | 0.6577   | 0.6595   |
| 0.3165        | 750.0  | 9000  | 0.8075          | 0.6628   | 0.6633   |
| 0.3145        | 766.67 | 9200  | 0.8159          | 0.6595   | 0.6612   |
| 0.3133        | 783.33 | 9400  | 0.8092          | 0.6621   | 0.6633   |
| 0.3126        | 800.0  | 9600  | 0.8099          | 0.6601   | 0.6616   |
| 0.3124        | 816.67 | 9800  | 0.8129          | 0.6610   | 0.6626   |
| 0.3128        | 833.33 | 10000 | 0.8149          | 0.6616   | 0.6633   |
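
The card does not state how the F1 score is averaged. A plausible metric definition, assuming macro-averaged F1 computed with scikit-learn (not confirmed by the card), would look like the sketch below; it follows the `compute_metrics` convention used by the 🤗 `Trainer`.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Assumed metric definition: macro-averaged F1 plus plain accuracy."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, predictions, average="macro"),
        "accuracy": accuracy_score(labels, predictions),
    }
```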

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2