---
license: apache-2.0
tags:
  - protein language model
  - generated_from_trainer
datasets:
  - train
metrics:
  - spearmanr
model-index:
  - name: tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: cradle-bio/tape-fluorescence
          type: train
        metrics:
          - name: Spearmanr
            type: spearmanr
            value: 0.6085202769301487
---

# tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert

This model is a fine-tuned version of [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert) on the cradle-bio/tape-fluorescence dataset. It achieves the following results on the evaluation set:

- Loss: 0.2716
- Spearmanr: 0.6085
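
For reference, a minimal inference sketch (untested). It assumes the checkpoint is published as `thundaa/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert` with a standard single-output regression head loadable via `AutoModelForSequenceClassification`, and that inputs follow the ProtBert convention of space-separated amino acids:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id; adjust if the checkpoint lives under a different name.
checkpoint = "thundaa/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

# ProtBert-family tokenizers expect residues separated by spaces.
# Example: N-terminal fragment of a GFP-like sequence (illustrative only).
sequence = "M S K G E E L F T G V V P I L V E L D G D V N G H K F S V S G"

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    fluorescence = model(**inputs).logits.squeeze(-1)
print(f"Predicted fluorescence: {fluorescence.item():.4f}")
```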

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 17
- gradient_accumulation_steps: 64
- total_train_batch_size: 2560
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
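
A sketch of these settings expressed as `transformers.TrainingArguments` (an assumption about the setup, not the author's original script; `output_dir` and `evaluation_strategy` are illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tape-fluorescence-prediction",  # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=40,
    per_device_eval_batch_size=40,
    seed=17,
    gradient_accumulation_steps=64,  # 40 x 64 = 2560 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="epoch",     # assumed: the table below reports per-epoch eval
)
```

Adam with betas=(0.9,0.999) and epsilon=1e-08 matches the `Trainer` default optimizer, so it needs no explicit argument here.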

### Training results

| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 5.9308 | 0.93 | 7 | 1.6932 | 0.0822 |
| 1.0148 | 1.93 | 14 | 0.7407 | 0.1233 |
| 0.7748 | 2.93 | 21 | 0.7388 | 0.3237 |
| 0.7444 | 3.93 | 28 | 0.8205 | 0.4712 |
| 0.7623 | 4.93 | 35 | 0.7168 | 0.4582 |
| 0.7117 | 5.93 | 42 | 0.6898 | 0.4839 |
| 0.7987 | 6.93 | 49 | 1.1860 | 0.3994 |
| 0.8235 | 7.93 | 56 | 0.7290 | 0.4122 |
| 1.0447 | 8.93 | 63 | 1.8475 | 0.4169 |
| 0.9244 | 9.93 | 70 | 0.8985 | 0.4361 |
| 0.7392 | 10.93 | 77 | 0.7053 | 0.4709 |
| 0.5879 | 11.93 | 84 | 0.4930 | 0.4761 |
| 0.5723 | 12.93 | 91 | 0.9298 | 0.4765 |
| 0.7221 | 13.93 | 98 | 0.9479 | 0.4866 |
| 1.0731 | 14.93 | 105 | 0.5306 | 0.5040 |
| 0.5242 | 15.93 | 112 | 0.6331 | 0.4938 |
| 0.5606 | 16.93 | 119 | 0.4096 | 0.5060 |
| 0.5314 | 17.93 | 126 | 0.5781 | 0.5130 |
| 0.4384 | 18.93 | 133 | 0.3880 | 0.5393 |
| 0.4117 | 19.93 | 140 | 0.4584 | 0.5504 |
| 0.4387 | 20.93 | 147 | 0.3611 | 0.5674 |
| 0.3613 | 21.93 | 154 | 0.4159 | 0.5806 |
| 0.5157 | 22.93 | 161 | 0.4041 | 0.5869 |
| 0.4049 | 23.93 | 168 | 0.3187 | 0.5888 |
| 0.3318 | 24.93 | 175 | 0.3206 | 0.5889 |
| 0.3317 | 25.93 | 182 | 0.2964 | 0.5941 |
| 0.303 | 26.93 | 189 | 0.2803 | 0.6006 |
| 0.3058 | 27.93 | 196 | 0.2758 | 0.6042 |
| 0.2988 | 28.93 | 203 | 0.3016 | 0.6049 |
| 0.2814 | 29.93 | 210 | 0.2716 | 0.6085 |
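
The Spearmanr column above is the rank correlation between predicted and true fluorescence values. A sketch of a `compute_metrics` hook that would produce it (an assumption; the card does not include the original training code):

```python
from scipy.stats import spearmanr

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # The regression head emits shape (batch, 1); flatten before correlating.
    return {"spearmanr": spearmanr(predictions.squeeze(-1), labels).correlation}
```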

### Framework versions

- Transformers 4.18.0
- PyTorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1