---
language:
  - en
tags:
  - generated_from_trainer
datasets:
  - glue
metrics:
  - accuracy
model-index:
  - name: hBERTv1_new_pretrain_48_KD_qnli
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: GLUE QNLI
          type: glue
          config: qnli
          split: validation
          args: qnli
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6009518579535054
---

hBERTv1_new_pretrain_48_KD_qnli

This model is a fine-tuned version of gokuls/bert_12_layer_model_v1_complete_training_new_48_KD on the GLUE QNLI dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6648
  • Accuracy: 0.6010
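
As a minimal usage sketch (not part of the original card), the checkpoint can be loaded for QNLI-style question/sentence classification. The Hub id gokuls/hBERTv1_new_pretrain_48_KD_qnli is assumed from the model-index entry above, and the example texts are illustrative only.

```python
# Minimal inference sketch; the Hub id below is assumed, not stated in the card body.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "gokuls/hBERTv1_new_pretrain_48_KD_qnli"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QNLI is a question/sentence entailment task, so inputs are encoded as a pair.
question = "What is the capital of France?"
sentence = "Paris is the capital and most populous city of France."
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
# Label names depend on the saved config (e.g. LABEL_0 / LABEL_1 for entailment / not_entailment).
print(model.config.id2label[predicted_id])
```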

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 4e-05
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 10
  • distributed_type: multi-GPU
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
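
The generated_from_trainer tag indicates the run was driven by the Hugging Face Trainer. Below is a hedged sketch of TrainingArguments mirroring the values above; the output directory is a placeholder, and per-epoch evaluation is inferred from the results table rather than stated explicitly in the card.

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is a placeholder; any knowledge-distillation setup used for the
# base model's pretraining is outside the scope of this fine-tuning config.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hBERTv1_new_pretrain_48_KD_qnli",  # placeholder path
    learning_rate=4e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=10,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # per-epoch eval, inferred from the results table
    save_strategy="epoch",
    logging_strategy="epoch",
)
```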

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6818        | 1.0   | 819  | 0.6669          | 0.5966   |
| 0.6689        | 2.0   | 1638 | 0.6732          | 0.5858   |
| 0.6675        | 3.0   | 2457 | 0.6721          | 0.5810   |
| 0.663         | 4.0   | 3276 | 0.6793          | 0.5832   |
| 0.66          | 5.0   | 4095 | 0.6663          | 0.5999   |
| 0.6574        | 6.0   | 4914 | 0.6648          | 0.6010   |
| 0.6591        | 7.0   | 5733 | 0.6781          | 0.5731   |
| 0.659         | 8.0   | 6552 | 0.6685          | 0.5951   |
| 0.6697        | 9.0   | 7371 | 0.6793          | 0.5792   |
| 0.6755        | 10.0  | 8190 | 0.6829          | 0.5698   |
| 0.6794        | 11.0  | 9009 | 0.6780          | 0.5773   |
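
The reported accuracy can be checked against the GLUE QNLI validation split with the datasets and evaluate libraries. This is a hedged reproduction sketch, again assuming the checkpoint is published as gokuls/hBERTv1_new_pretrain_48_KD_qnli.

```python
# Reproduction sketch for the QNLI validation accuracy; the Hub id is assumed.
import evaluate
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "gokuls/hBERTv1_new_pretrain_48_KD_qnli"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

validation = load_dataset("glue", "qnli", split="validation")
metric = evaluate.load("glue", "qnli")

for start in range(0, len(validation), 128):
    batch = validation[start:start + 128]  # dict of lists
    enc = tokenizer(batch["question"], batch["sentence"],
                    truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1)
    metric.add_batch(predictions=preds, references=batch["label"])

print(metric.compute())  # expected to be close to {'accuracy': 0.6010}
```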

Framework versions

  • Transformers 4.30.2
  • Pytorch 1.14.0a0+410ce96
  • Datasets 2.12.0
  • Tokenizers 0.13.3