---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - tweet_eval
metrics:
  - f1
base_model: distilbert-base-uncased
model-index:
  - name: demo_irony_42
    results:
      - task:
          type: text-classification
          name: Text Classification
        dataset:
          name: tweet_eval
          type: tweet_eval
          args: irony
        metrics:
          - type: f1
            value: 0.685764300192161
            name: F1
---

# demo_irony_42

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the irony subset of the tweet_eval dataset. It achieves the following results on the evaluation set:

- Loss: 1.2905
- F1: 0.6858
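
As a minimal usage sketch, the snippet below loads the checkpoint with the `transformers` pipeline API, assuming it is available under the path or Hub repo id `demo_irony_42`. Unless an explicit `id2label` mapping was saved with the model, the labels follow tweet_eval's irony convention (`LABEL_0` = non_irony, `LABEL_1` = irony):

```python
from transformers import pipeline

# Hypothetical path / repo id: adjust to wherever this checkpoint actually lives.
classifier = pipeline("text-classification", model="demo_irony_42")

print(classifier("Great, another Monday. Just what I needed."))
# Expected output shape (the score shown is illustrative, not from the actual model):
# [{'label': 'LABEL_1', 'score': 0.87}]
```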

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto `TrainingArguments` follows the list:

- learning_rate: 2.7735294032820418e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
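
The card does not include the original training script, so the following is a hypothetical reconstruction of the run under the settings above using the `Trainer` API. The `output_dir`, truncation-only preprocessing, per-epoch evaluation strategy, and the F1 averaging mode are assumptions; the listed Adam betas and epsilon are the `Trainer` defaults, so they need no explicit setting:

```python
import numpy as np
from datasets import load_dataset, load_metric
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load the irony subset of tweet_eval (matches the dataset args in the metadata).
dataset = load_dataset("tweet_eval", "irony")

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

f1_metric = load_metric("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Averaging mode is an assumption; the card only reports "f1".
    return f1_metric.compute(predictions=preds, references=labels, average="macro")

args = TrainingArguments(
    output_dir="demo_irony_42",           # assumed output directory
    learning_rate=2.7735294032820418e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    evaluation_strategy="epoch",          # assumed; the table reports per-epoch eval
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,                  # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)

trainer.train()
```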

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 358  | 0.5872          | 0.6786 |
| 0.5869        | 2.0   | 716  | 0.6884          | 0.6952 |
| 0.3417        | 3.0   | 1074 | 0.9824          | 0.6995 |
| 0.3417        | 4.0   | 1432 | 1.2905          | 0.6858 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3