
# thermo-predictor-thermo-evotuning-prot_bert

This model is a fine-tuned version of [thundaa/thermo-evotuning-prot_bert](https://huggingface.co/thundaa/thermo-evotuning-prot_bert) on the [cradle-bio/tape-thermostability](https://huggingface.co/datasets/cradle-bio/tape-thermostability) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1617
  • Spearmanr: 0.6914
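
For reference, a minimal inference sketch is shown below. It assumes the checkpoint exposes a single-output regression head (`num_labels=1`, consistent with the Spearman metric) and that, like other ProtBert models, the tokenizer expects amino acids separated by spaces; the example sequence is arbitrary.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "thundaa/thermo-predictor-thermo-evotuning-prot_bert"

# Assumption: ProtBert-style vocab, so casing must be preserved.
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# ProtBert-style input: one space between residues (arbitrary example).
sequence = "M K T A Y I A K Q R Q I S F V K S H F S R Q L E E R L G L"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    # Assumption: single regression logit = predicted thermostability score.
    score = model(**inputs).logits.squeeze().item()
print(score)
```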

## Model description

A regression model that predicts protein thermostability scores from amino-acid sequences, obtained by fine-tuning the evotuned ProtBert checkpoint thundaa/thermo-evotuning-prot_bert.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned and evaluated on the cradle-bio/tape-thermostability dataset; see the dataset card for details on splits and labels.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 256
  • eval_batch_size: 256
  • seed: 42
  • gradient_accumulation_steps: 64
  • total_train_batch_size: 16384
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
  • mixed_precision_training: Native AMP
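
As a rough guide, these values map onto `transformers.TrainingArguments` as in the sketch below; `output_dir` and the surrounding `Trainer` wiring are assumptions, and the Adam settings listed above are the `Trainer` defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="thermo-predictor-thermo-evotuning-prot_bert",  # assumed
    learning_rate=4e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    gradient_accumulation_steps=64,  # 256 * 64 = 16384 effective batch size
    lr_scheduler_type="linear",
    num_train_epochs=1,              # as reported above
    fp16=True,                       # Native AMP mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08 are the defaults
)
```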

### Training results

| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 0.4734        | 0.68  | 2    | 0.3146          | 0.3359    |
| 0.4392        | 1.68  | 4    | 0.2936          | 0.3407    |
| 0.4034        | 2.68  | 6    | 0.2633          | 0.3696    |
| 0.3669        | 3.68  | 8    | 0.2437          | 0.3903    |
| 0.3496        | 4.68  | 10   | 0.2377          | 0.4102    |
| 0.3351        | 5.68  | 12   | 0.2285          | 0.4204    |
| 0.3289        | 6.68  | 14   | 0.2267          | 0.4180    |
| 0.3267        | 7.68  | 16   | 0.2258          | 0.4242    |
| 0.3177        | 8.68  | 18   | 0.2206          | 0.4295    |
| 0.3116        | 9.68  | 20   | 0.2150          | 0.4365    |
| 0.3039        | 10.68 | 22   | 0.2115          | 0.4365    |
| 0.2985        | 11.68 | 24   | 0.2062          | 0.4469    |
| 0.2927        | 12.68 | 26   | 0.2045          | 0.4531    |
| 0.2885        | 13.68 | 28   | 0.2005          | 0.4603    |
| 0.2838        | 14.68 | 30   | 0.1987          | 0.4690    |
| 0.2806        | 15.68 | 32   | 0.1975          | 0.4744    |
| 0.2772        | 16.68 | 34   | 0.1970          | 0.4765    |
| 0.2728        | 17.68 | 36   | 0.1939          | 0.4845    |
| 0.2684        | 18.68 | 38   | 0.1931          | 0.4858    |
| 0.2641        | 19.68 | 40   | 0.1925          | 0.4936    |
| 0.2608        | 20.68 | 42   | 0.1905          | 0.4929    |
| 0.2566        | 21.68 | 44   | 0.1886          | 0.5049    |
| 0.2518        | 22.68 | 46   | 0.1875          | 0.5095    |
| 0.2467        | 23.68 | 48   | 0.1869          | 0.5141    |
| 0.2424        | 24.68 | 50   | 0.1859          | 0.5161    |
| 0.2375        | 25.68 | 52   | 0.1850          | 0.5223    |
| 0.2329        | 26.68 | 54   | 0.1851          | 0.5210    |
| 0.2279        | 27.68 | 56   | 0.1850          | 0.5294    |
| 0.2226        | 28.68 | 58   | 0.1837          | 0.5310    |
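
The Spearmanr column tracks the rank correlation between predictions and labels on the validation set. A minimal sketch of such a metric function, assuming `scipy` and the standard `Trainer` `compute_metrics` interface:

```python
from scipy.stats import spearmanr

def compute_metrics(eval_pred):
    # eval_pred is (predictions, labels); predictions has shape (n, 1)
    # for a single-output regression head, so squeeze to a 1-D array.
    predictions, labels = eval_pred
    rho, _ = spearmanr(predictions.squeeze(), labels)
    return {"spearmanr": rho}

# Usage (assumed wiring): Trainer(..., compute_metrics=compute_metrics)
```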

### Framework versions

  • Transformers 4.18.0
  • PyTorch 1.11.0
  • Datasets 2.1.0
  • Tokenizers 0.12.1