
# tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert

This model is a fine-tuned version of [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert) on the cradle-bio/tape-fluorescence dataset. It achieves the following results on the evaluation set:

- Loss: 0.3377
- Spearmanr: 0.5505
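
Usage is not documented in this card, so here is a minimal inference sketch. It assumes the checkpoint is published as `thundaa/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert` (a hypothetical repo id inferred from the card title) and exposes a single-output sequence-classification head for regression; ProtBert-family tokenizers expect amino-acid sequences with spaces between residues.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id, inferred from the card title; adjust if the model lives elsewhere.
MODEL_ID = "thundaa/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

# ProtBert-family models tokenize amino acids as space-separated "words".
sequence = "M K T A Y I A K Q R Q I S F V K S H F S R Q L E E R L G L I E V Q"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 1): single regression output
print(f"Predicted fluorescence: {logits.item():.4f}")
```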

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 2560
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
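
A sketch of how these values map onto `transformers.TrainingArguments` (Transformers 4.18) is below. The `output_dir` and evaluation schedule are illustrative assumptions, not taken from the original training script; the listed Adam betas and epsilon are the optimizer defaults, so they need no extra arguments, and `fp16=True` enables native AMP.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tape-fluorescence-prediction",  # placeholder, not from the original run
    learning_rate=5e-5,
    per_device_train_batch_size=40,
    per_device_eval_batch_size=40,
    seed=42,
    gradient_accumulation_steps=64,  # 40 x 64 = 2560 effective train batch size
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,  # native AMP mixed-precision training
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results below
)
```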

### Training results

| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 6.2764        | 0.93  | 7    | 1.9927          | -0.0786   |
| 1.1206        | 1.93  | 14   | 0.8223          | -0.1543   |
| 0.8054        | 2.93  | 21   | 0.6894          | 0.2050    |
| 0.7692        | 3.93  | 28   | 0.8084          | 0.2807    |
| 0.7597        | 4.93  | 35   | 0.6613          | 0.4003    |
| 0.7416        | 5.93  | 42   | 0.6803          | 0.3829    |
| 0.7256        | 6.93  | 49   | 0.6428          | 0.4416    |
| 0.6966        | 7.93  | 56   | 0.6086          | 0.4506    |
| 0.7603        | 8.93  | 63   | 0.9119          | 0.4697    |
| 0.9187        | 9.93  | 70   | 0.6048          | 0.4757    |
| 1.0371        | 10.93 | 77   | 2.0742          | 0.4076    |
| 1.0947        | 11.93 | 84   | 0.6633          | 0.4522    |
| 0.6946        | 12.93 | 91   | 0.6008          | 0.4123    |
| 0.6618        | 13.93 | 98   | 0.5931          | 0.4457    |
| 0.8635        | 14.93 | 105  | 1.9561          | 0.4331    |
| 0.9444        | 15.93 | 112  | 0.5627          | 0.5041    |
| 0.5535        | 16.93 | 119  | 0.4348          | 0.4840    |
| 0.9059        | 17.93 | 126  | 0.6704          | 0.5123    |
| 0.5693        | 18.93 | 133  | 0.4616          | 0.5285    |
| 0.6298        | 19.93 | 140  | 0.6915          | 0.5166    |
| 0.955         | 20.93 | 147  | 0.6679          | 0.5677    |
| 0.7866        | 21.93 | 154  | 0.8136          | 0.5559    |
| 0.6687        | 22.93 | 161  | 0.4782          | 0.5561    |
| 0.5336        | 23.93 | 168  | 0.4447          | 0.5499    |
| 0.4673        | 24.93 | 175  | 0.4258          | 0.5428    |
| 0.478         | 25.93 | 182  | 0.3651          | 0.5329    |
| 0.4023        | 26.93 | 189  | 0.3688          | 0.5428    |
| 0.3961        | 27.93 | 196  | 0.3692          | 0.5509    |
| 0.3808        | 28.93 | 203  | 0.3434          | 0.5514    |
| 0.3433        | 29.93 | 210  | 0.3377          | 0.5505    |
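
The Spearmanr column is the rank correlation between predicted and measured fluorescence on the validation set. A minimal sketch of a `compute_metrics` function producing such a value with `scipy.stats.spearmanr` is below; the card does not show the implementation actually used for this run.

```python
import numpy as np
from scipy.stats import spearmanr

def compute_metrics(eval_pred):
    """Rank correlation between regression predictions and labels."""
    predictions, labels = eval_pred
    correlation, _ = spearmanr(np.squeeze(predictions), labels)
    return {"spearmanr": correlation}
```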

### Framework versions

- Transformers 4.18.0
- PyTorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
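
To reproduce this environment, the listed versions can be pinned directly (on PyPI, PyTorch is distributed as `torch` and the Datasets library as `datasets`):

```bash
pip install transformers==4.18.0 torch==1.11.0 datasets==2.1.0 tokenizers==0.12.1
```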