# GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_65536_512_47M on the mahdibaghbanzadeh/GUE_EMP_H3K36me3 dataset. It achieves the following results on the evaluation set:
- Loss: 0.7196
- F1 Score: 0.6313
- Accuracy: 0.6333
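The card does not include its evaluation script, but for a binary classification task like this one, the reported metrics can be reproduced from labels and predictions along these lines (a minimal sketch, assuming F1 Score is the macro average over the two classes, which matches the F1/accuracy values tracking each other closely in the table below):

```python
def binary_metrics(labels, preds):
    """Accuracy and macro-averaged F1 for binary (0/1) labels.

    Sketch of the assumed metric definitions; not the card's own
    evaluation code.
    """
    assert len(labels) == len(preds) and labels
    acc = sum(y == p for y, p in zip(labels, preds)) / len(labels)

    def f1_for(cls):
        # Treat `cls` as the positive class and compute its F1.
        tp = sum(y == cls and p == cls for y, p in zip(labels, preds))
        fp = sum(y != cls and p == cls for y, p in zip(labels, preds))
        fn = sum(y == cls and p != cls for y, p in zip(labels, preds))
        if tp == 0:
            return 0.0
        prec = tp / (tp + fp)
        rec = tp / (tp + fn)
        return 2 * prec * rec / (prec + rec)

    macro_f1 = (f1_for(0) + f1_for(1)) / 2
    return acc, macro_f1
```

Because macro F1 averages per-class F1 while accuracy weights examples directly, the two can diverge slightly on imbalanced predictions, which is why the card reports both.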
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
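With `lr_scheduler_type: linear` and a fixed step budget, the learning rate decays linearly from its initial value to zero over the 10,000 steps. A minimal sketch of that schedule (assuming no warmup, i.e. `num_warmup_steps=0` in Transformers' linear schedule; the card does not state a warmup value):

```python
def linear_lr(step, base_lr=5e-4, total_steps=10_000):
    """Learning rate at a given optimizer step under linear decay
    to zero with no warmup (assumed; warmup is not listed above)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)
```

For example, the learning rate is halved at step 5,000 (`2.5e-4`) and reaches zero at step 10,000, which is one reason the validation loss in the table below stabilizes near the end of training.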
### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
0.6749 | 14.29 | 200 | 0.6594 | 0.6084 | 0.6101 |
0.6255 | 28.57 | 400 | 0.6720 | 0.6072 | 0.6101 |
0.6015 | 42.86 | 600 | 0.6764 | 0.5969 | 0.6032 |
0.5788 | 57.14 | 800 | 0.6919 | 0.6095 | 0.6124 |
0.5616 | 71.43 | 1000 | 0.6995 | 0.6028 | 0.6104 |
0.5483 | 85.71 | 1200 | 0.6893 | 0.6170 | 0.6184 |
0.5386 | 100.0 | 1400 | 0.6886 | 0.6205 | 0.6207 |
0.5316 | 114.29 | 1600 | 0.6852 | 0.6175 | 0.6173 |
0.5234 | 128.57 | 1800 | 0.7024 | 0.6158 | 0.6155 |
0.518 | 142.86 | 2000 | 0.7165 | 0.6231 | 0.6247 |
0.5102 | 157.14 | 2200 | 0.7304 | 0.6167 | 0.6218 |
0.5036 | 171.43 | 2400 | 0.7301 | 0.6204 | 0.6259 |
0.4958 | 185.71 | 2600 | 0.7247 | 0.6267 | 0.6276 |
0.4915 | 200.0 | 2800 | 0.7179 | 0.6249 | 0.6259 |
0.4845 | 214.29 | 3000 | 0.7353 | 0.6344 | 0.6370 |
0.4783 | 228.57 | 3200 | 0.7213 | 0.6297 | 0.6296 |
0.4723 | 242.86 | 3400 | 0.7260 | 0.6342 | 0.6368 |
0.4663 | 257.14 | 3600 | 0.7465 | 0.6292 | 0.6327 |
0.4598 | 271.43 | 3800 | 0.7543 | 0.6333 | 0.6342 |
0.454 | 285.71 | 4000 | 0.7691 | 0.6337 | 0.6365 |
0.4461 | 300.0 | 4200 | 0.7411 | 0.6293 | 0.6293 |
0.442 | 314.29 | 4400 | 0.7787 | 0.6264 | 0.6279 |
0.4358 | 328.57 | 4600 | 0.7773 | 0.6284 | 0.6316 |
0.4322 | 342.86 | 4800 | 0.7750 | 0.6241 | 0.6287 |
0.4251 | 357.14 | 5000 | 0.7859 | 0.6260 | 0.6290 |
0.4213 | 371.43 | 5200 | 0.8191 | 0.6295 | 0.6319 |
0.4152 | 385.71 | 5400 | 0.7943 | 0.6249 | 0.6273 |
0.4106 | 400.0 | 5600 | 0.7933 | 0.6276 | 0.6293 |
0.4072 | 414.29 | 5800 | 0.8317 | 0.6235 | 0.6241 |
0.4027 | 428.57 | 6000 | 0.8035 | 0.6268 | 0.6276 |
0.3995 | 442.86 | 6200 | 0.8059 | 0.6245 | 0.6261 |
0.3955 | 457.14 | 6400 | 0.8212 | 0.6260 | 0.6273 |
0.3922 | 471.43 | 6600 | 0.8071 | 0.6238 | 0.6247 |
0.3894 | 485.71 | 6800 | 0.8409 | 0.6251 | 0.6276 |
0.3867 | 500.0 | 7000 | 0.8482 | 0.6189 | 0.6196 |
0.3851 | 514.29 | 7200 | 0.8274 | 0.6199 | 0.6210 |
0.383 | 528.57 | 7400 | 0.8286 | 0.6211 | 0.6236 |
0.3787 | 542.86 | 7600 | 0.8477 | 0.6235 | 0.6253 |
0.3789 | 557.14 | 7800 | 0.8196 | 0.6253 | 0.6259 |
0.3763 | 571.43 | 8000 | 0.8285 | 0.6200 | 0.6210 |
0.3744 | 585.71 | 8200 | 0.8376 | 0.6222 | 0.6239 |
0.3715 | 600.0 | 8400 | 0.8462 | 0.6231 | 0.6247 |
0.3677 | 614.29 | 8600 | 0.8558 | 0.6202 | 0.6218 |
0.3692 | 628.57 | 8800 | 0.8468 | 0.6226 | 0.6244 |
0.3691 | 642.86 | 9000 | 0.8440 | 0.6214 | 0.6230 |
0.3659 | 657.14 | 9200 | 0.8636 | 0.6238 | 0.6261 |
0.366 | 671.43 | 9400 | 0.8386 | 0.6216 | 0.6230 |
0.3659 | 685.71 | 9600 | 0.8443 | 0.6214 | 0.6227 |
0.3643 | 700.0 | 9800 | 0.8483 | 0.6233 | 0.6247 |
0.3642 | 714.29 | 10000 | 0.8486 | 0.6219 | 0.6233 |
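The epoch column implies roughly 14 optimizer steps per epoch, which together with the batch size of 2048 gives a rough idea of the training-set size. A back-of-envelope check (my inference from the table, not a figure stated by the card):

```python
# First row of the results table: 200 steps correspond to 14.29 epochs.
steps, epochs = 200, 14.29
steps_per_epoch = steps / epochs            # ~14 steps per epoch
train_batch_size = 2048                     # from the hyperparameters above

# Approximate number of training examples (rounded estimate only).
approx_train_size = round(steps_per_epoch) * train_batch_size
print(round(steps_per_epoch, 2), approx_train_size)
```

This suggests a training set on the order of 28–29k examples; the actual dataset card for GUE_EMP_H3K36me3 should be consulted for the exact split sizes.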
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2