---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
  - generated_from_trainer
datasets:
  - massive
metrics:
  - f1
model-index:
  - name: results
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: massive
          type: massive
          config: en-US
          split: test
          args: en-US
        metrics:
          - name: F1
            type: f1
            value: 0.9734295558770142
---

# results

This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the MASSIVE dataset (en-US). It achieves the following results on the evaluation set:

- Loss: 0.0231
- F1: 0.9734
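
You can try the model directly through the `pipeline` API. A minimal inference sketch, assuming the repository id `afaji/distilbert_cheat_massive`:

```python
# Minimal inference sketch; the repository id is an assumption and may need
# adjusting to the model's actual location on the Hub.
from transformers import pipeline

classifier = pipeline("text-classification", model="afaji/distilbert_cheat_massive")
print(classifier("wake me up at nine am on friday"))
# -> e.g. [{'label': ..., 'score': ...}]; label names depend on the
#    id2label mapping saved with the model
```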

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
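
The card metadata does indicate the MASSIVE dataset with the `en-US` config and a `test` evaluation split. A minimal loading sketch, assuming the `AmazonScience/massive` dataset id on the Hugging Face Hub:

```python
# Minimal data-loading sketch; the dataset id below is an assumption based
# on the `massive` / `en-US` entries in the card metadata.
from datasets import load_dataset

massive = load_dataset("AmazonScience/massive", "en-US")
example = massive["test"][0]
print(example["utt"], example["intent"])  # utterance text and intent label id
```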

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
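
These settings map directly onto `transformers.TrainingArguments`. A minimal sketch, with `output_dir` assumed:

```python
# Minimal TrainingArguments sketch mirroring the hyperparameters above;
# output_dir is an assumption, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
    # default optimizer settings, so nothing extra is needed for it here.
)
```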

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8235        | 0.5   | 185  | 3.7551          | 0.0022 |
| 3.5949        | 0.99  | 370  | 3.1246          | 0.0454 |
| 2.8705        | 1.49  | 555  | 2.4379          | 0.1543 |
| 2.3444        | 1.99  | 740  | 1.7732          | 0.2967 |
| 1.7151        | 2.49  | 925  | 1.2983          | 0.4403 |
| 1.3959        | 2.98  | 1110 | 0.9965          | 0.5490 |
| 0.9919        | 3.48  | 1295 | 0.7098          | 0.6880 |
| 0.9495        | 3.98  | 1480 | 0.5798          | 0.7014 |
| 0.6           | 4.48  | 1665 | 0.4419          | 0.7408 |
| 0.5952        | 4.97  | 1850 | 0.3653          | 0.7522 |
| 0.3715        | 5.47  | 2035 | 0.3077          | 0.7957 |
| 0.3783        | 5.97  | 2220 | 0.2050          | 0.8453 |
| 0.196         | 6.47  | 2405 | 0.1532          | 0.8386 |
| 0.22          | 6.96  | 2590 | 0.0968          | 0.8871 |
| 0.1117        | 7.46  | 2775 | 0.0725          | 0.9057 |
| 0.1065        | 7.96  | 2960 | 0.0458          | 0.9265 |
| 0.0644        | 8.45  | 3145 | 0.0378          | 0.9336 |
| 0.0526        | 8.95  | 3330 | 0.0324          | 0.9616 |
| 0.0521        | 9.45  | 3515 | 0.0251          | 0.9708 |
| 0.0302        | 9.95  | 3700 | 0.0231          | 0.9734 |
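
The reported F1 is a multi-class score, presumably over the MASSIVE intent labels. A minimal `compute_metrics` sketch for the `Trainer`; the averaging mode (`weighted` here) is an assumption, as the card does not state how the F1 was aggregated:

```python
# Minimal compute_metrics sketch; average="weighted" is an assumption.
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return f1_metric.compute(predictions=predictions, references=labels, average="weighted")
```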

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2