---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: distilbert-base-uncased-finetuned-clinc
    results: []
---

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set (see the usage example after the list):

- Loss: 0.7761
- Accuracy: 0.9174
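
A hedged example of loading this checkpoint for intent classification; the repository id below is an assumption based on this card's owner and model name:

```python
from transformers import pipeline

# Repository id is an assumption based on this model card's owner and name.
intent_classifier = pipeline(
    "text-classification",
    model="buruzaemon/distilbert-base-uncased-finetuned-clinc",
)

print(intent_classifier("Please transfer $100 from my checking to my savings account."))
```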

## Model description

This is an initial example of knowledge distillation in which the student loss is entirely the cross-entropy loss \\(L_{CE}\\) against the ground-truth labels and none of the knowledge-distillation loss \\(L_{KD}\\); that is, \\(\alpha = 1.0\\) in the combined student loss \\(L = \alpha L_{CE} + (1 - \alpha) L_{KD}\\).
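
A minimal sketch of that loss, loosely following the distillation-trainer recipe from the book cited under Training procedure below. The class name, constructor arguments, and teacher handling are assumptions, not this repository's actual training code:

```python
import torch
import torch.nn.functional as F
from transformers import Trainer

class DistillationTrainer(Trainer):
    """Trainer whose loss mixes hard-label cross-entropy with a soft-label KD term."""

    def __init__(self, *args, teacher_model=None, alpha=1.0, temperature=2.0, **kwargs):
        super().__init__(*args, **kwargs)
        # teacher_model is assumed to already live on the same device as the student.
        self.teacher_model = teacher_model
        self.alpha = alpha              # alpha = 1.0 reproduces the pure cross-entropy setup used here
        self.temperature = temperature

    def compute_loss(self, model, inputs, return_outputs=False):
        outputs_student = model(**inputs)
        loss_ce = outputs_student.loss  # cross-entropy against the ground-truth intent labels
        loss = self.alpha * loss_ce

        if self.alpha < 1.0:
            # Soft-label KL term, scaled by T^2 as is conventional for knowledge distillation.
            with torch.no_grad():
                outputs_teacher = self.teacher_model(**inputs)
            T = self.temperature
            loss_kd = F.kl_div(
                F.log_softmax(outputs_student.logits / T, dim=-1),
                F.softmax(outputs_teacher.logits / T, dim=-1),
                reduction="batchmean",
            ) * (T ** 2)
            loss = loss + (1.0 - self.alpha) * loss_kd

        return (loss, outputs_student) if return_outputs else loss
```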

## Intended uses & limitations

More information needed

## Training and evaluation data

The training and evaluation data come straight from the train and validation splits of the clinc_oos dataset, respectively, and are tokenized with the distilbert-base-uncased tokenizer.
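
A hedged sketch of that preparation; the `plus` configuration and the `text`/`intent` column names follow the public clinc_oos dataset on the Hub and are assumptions about this card's exact setup:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# The "plus" configuration and the column names below follow the public
# clinc_oos dataset card; they are not taken from this repository.
clinc = load_dataset("clinc_oos", "plus")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

clinc_enc = clinc.map(tokenize, batched=True, remove_columns=["text"])
clinc_enc = clinc_enc.rename_column("intent", "labels")

train_ds, eval_ds = clinc_enc["train"], clinc_enc["validation"]
```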

## Training procedure

Please see page 224 in Chapter 8, "Making Transformers Efficient in Production", of *Natural Language Processing with Transformers* (May 2022).

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- num_epochs: 5
- alpha: 1.0
- temperature: 2.0
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 8675309
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
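
A hedged sketch of how these values could map onto `TrainingArguments`; the output directory and evaluation strategy are assumptions, and `alpha`/`temperature` are not `TrainingArguments` fields but would be consumed by the distillation trainer sketched above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",  # assumed output directory
    num_train_epochs=5,
    learning_rate=2e-05,
    per_device_train_batch_size=48,  # reported as train_batch_size above
    per_device_eval_batch_size=48,   # reported as eval_batch_size above
    seed=8675309,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",     # assumption; the results table reports per-epoch validation metrics
)
# The default optimizer (AdamW with betas=(0.9, 0.999) and epsilon=1e-08) matches the values listed above.
```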

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 318  | 3.2998          | 0.7132   |
| 3.7996        | 2.0   | 636  | 1.8739          | 0.8390   |
| 3.7996        | 3.0   | 954  | 1.1564          | 0.8903   |
| 1.689         | 4.0   | 1272 | 0.8571          | 0.9126   |
| 0.9017        | 5.0   | 1590 | 0.7761          | 0.9174   |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1