---
language:
  - en
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - glue
metrics:
  - accuracy
model-index:
  - name: first_try
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: GLUE MNLI
          type: glue
          config: mnli
          split: validation_matched
          args: mnli
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8417412530512612
---

first_try

This model is a fine-tuned version of bert-base-uncased on the GLUE MNLI dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the list):

  • Loss: 0.4506
  • Accuracy: 0.8417
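
The sketch below shows one way to run inference with this checkpoint using the standard `transformers` API. The repository id `jinjieyuan/first_try` is an assumption based on this card's name and uploader, and the example sentences are placeholders.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repository id; replace with the actual path of this checkpoint.
model_id = "jinjieyuan/first_try"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MNLI classifies a premise/hypothesis pair as entailment, neutral, or contradiction.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```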

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
  • mixed_precision_training: Native AMP
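
The optimizer and scheduler settings above match the Hugging Face `Trainer` defaults. The sketch below is a rough, illustrative reconstruction of such a run; it does not reproduce the elastic-width (NNCF) aspects visible in the training results, and the tokenization `max_length` and output directory are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Tokenize GLUE MNLI premise/hypothesis pairs (max_length is an assumption).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
raw = load_dataset("glue", "mnli")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128)

encoded = raw.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

args = TrainingArguments(
    output_dir="first_try",          # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,                       # Native AMP (requires a CUDA GPU)
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation_matched"],
    tokenizer=tokenizer,
)
trainer.train()
```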

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Width configuration |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------------------|
| 0.3038        | 1.0   | 12272 | 0.4950          | 0.8238   | OrderedDict([(<ElasticityDim.WIDTH: 'width'>, {0: 256, 1: 256, 2: 192, 3: 320, 4: 192, 5: 384, 6: 128, 7: 256, 8: 256, 9: 256, 10: 192, 11: 256, 12: 1542, 13: 1611, 14: 1891, 15: 1877, 16: 1825, 17: 1790, 18: 1678, 19: 1544, 20: 1223, 21: 628, 22: 345, 23: 213})]) |
| 0.3038        | 1.0   | 12272 | 0.4592          | 0.8385   | OrderedDict([(<ElasticityDim.WIDTH: 'width'>, {0: 768, 1: 768, 2: 768, 3: 768, 4: 768, 5: 768, 6: 768, 7: 768, 8: 768, 9: 768, 10: 768, 11: 768, 12: 3072, 13: 3072, 14: 3072, 15: 3072, 16: 3072, 17: 3072, 18: 3072, 19: 3072, 20: 3072, 21: 3072, 22: 3072, 23: 3072})]) |
| 0.1683        | 2.0   | 24544 | 0.4678          | 0.8326   | OrderedDict([(<ElasticityDim.WIDTH: 'width'>, {0: 256, 1: 256, 2: 192, 3: 320, 4: 192, 5: 384, 6: 128, 7: 256, 8: 256, 9: 256, 10: 192, 11: 256, 12: 1542, 13: 1611, 14: 1891, 15: 1877, 16: 1825, 17: 1790, 18: 1678, 19: 1544, 20: 1223, 21: 628, 22: 345, 23: 213})]) |
| 0.1683        | 2.0   | 24544 | 0.4285          | 0.8479   | OrderedDict([(<ElasticityDim.WIDTH: 'width'>, {0: 768, 1: 768, 2: 768, 3: 768, 4: 768, 5: 768, 6: 768, 7: 768, 8: 768, 9: 768, 10: 768, 11: 768, 12: 3072, 13: 3072, 14: 3072, 15: 3072, 16: 3072, 17: 3072, 18: 3072, 19: 3072, 20: 3072, 21: 3072, 22: 3072, 23: 3072})]) |
| 0.1132        | 3.0   | 36816 | 0.4638          | 0.8381   | OrderedDict([(<ElasticityDim.WIDTH: 'width'>, {0: 256, 1: 256, 2: 192, 3: 320, 4: 192, 5: 384, 6: 128, 7: 256, 8: 256, 9: 256, 10: 192, 11: 256, 12: 1542, 13: 1611, 14: 1891, 15: 1877, 16: 1825, 17: 1790, 18: 1678, 19: 1544, 20: 1223, 21: 628, 22: 345, 23: 213})]) |
| 0.1132        | 3.0   | 36816 | 0.4231          | 0.8492   | OrderedDict([(<ElasticityDim.WIDTH: 'width'>, {0: 768, 1: 768, 2: 768, 3: 768, 4: 768, 5: 768, 6: 768, 7: 768, 8: 768, 9: 768, 10: 768, 11: 768, 12: 3072, 13: 3072, 14: 3072, 15: 3072, 16: 3072, 17: 3072, 18: 3072, 19: 3072, 20: 3072, 21: 3072, 22: 3072, 23: 3072})]) |
| 0.0894        | 4.0   | 49088 | 0.4678          | 0.8383   | OrderedDict([(<ElasticityDim.WIDTH: 'width'>, {0: 256, 1: 256, 2: 192, 3: 320, 4: 192, 5: 384, 6: 128, 7: 256, 8: 256, 9: 256, 10: 192, 11: 256, 12: 1542, 13: 1611, 14: 1891, 15: 1877, 16: 1825, 17: 1790, 18: 1678, 19: 1544, 20: 1223, 21: 628, 22: 345, 23: 213})]) |
| 0.0894        | 4.0   | 49088 | 0.4261          | 0.8497   | OrderedDict([(<ElasticityDim.WIDTH: 'width'>, {0: 768, 1: 768, 2: 768, 3: 768, 4: 768, 5: 768, 6: 768, 7: 768, 8: 768, 9: 768, 10: 768, 11: 768, 12: 3072, 13: 3072, 14: 3072, 15: 3072, 16: 3072, 17: 3072, 18: 3072, 19: 3072, 20: 3072, 21: 3072, 22: 3072, 23: 3072})]) |

Framework versions

  • Transformers 4.29.1
  • PyTorch 1.12.1
  • Datasets 2.13.1
  • Tokenizers 0.13.3