---
language:
  - en
thumbnail: >-
  https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
  - text-classification
  - emotion
  - pytorch
license: apache-2.0
datasets:
  - emotion
metrics:
  - accuracy
  - f1
model-index:
  - name: bhadresh-savani/distilbert-base-uncased-emotion
    results:
      - task:
          type: text-classification
          name: Text Classification
        dataset:
          name: emotion
          type: emotion
          config: default
          split: test
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.927
            verified: true
          - name: Precision Macro
            type: precision
            value: 0.8880230732280744
            verified: true
          - name: Precision Micro
            type: precision
            value: 0.927
            verified: true
          - name: Precision Weighted
            type: precision
            value: 0.9272902840835793
            verified: true
          - name: Recall Macro
            type: recall
            value: 0.8790126653780703
            verified: true
          - name: Recall Micro
            type: recall
            value: 0.927
            verified: true
          - name: Recall Weighted
            type: recall
            value: 0.927
            verified: true
          - name: F1 Macro
            type: f1
            value: 0.8825061528287809
            verified: true
          - name: F1 Micro
            type: f1
            value: 0.927
            verified: true
          - name: F1 Weighted
            type: f1
            value: 0.926876082854655
            verified: true
          - name: loss
            type: loss
            value: 0.17403268814086914
            verified: true
---

# Distilbert-base-uncased-emotion

## Model description

DistilBERT is produced by knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language-understanding capabilities. As a result, it is smaller and faster than BERT and other BERT-based models.

`Distilbert-base-uncased` was fine-tuned on the emotion dataset using the Hugging Face Trainer with the following hyperparameters (a minimal training sketch follows the list):

* learning rate: 2e-5
* batch size: 64
* num_train_epochs: 8
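
The exact training script is not included in this card; below is a minimal sketch of how a run with these hyperparameters could look with the Trainer API. The tokenization helper and `output_dir` are illustrative assumptions, not the card author's script.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumption: the "emotion" dataset with a "text" column and 6 labels.
dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch examples.
    return tokenizer(batch["text"], padding="max_length", truncation=True)

dataset = dataset.map(tokenize, batched=True)

# Hyperparameters stated in the card: learning rate 2e-5, batch size 64, 8 epochs.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-emotion",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```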

## Model performance comparison on the emotion dataset from Twitter

| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| Distilbert-base-uncased-emotion | 93.8 | 93.79 | 398.69 |
| Bert-base-uncased-emotion | 94.05 | 94.06 | 190.152 |
| Roberta-base-emotion | 93.95 | 93.97 | 195.639 |
| Albert-base-v2-emotion | 93.6 | 93.65 | 182.794 |

## How to use the model

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
    return_all_scores=True,
)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)

"""
Output:
[[
{'label': 'sadness', 'score': 0.0006792712374590337},
{'label': 'joy', 'score': 0.9959300756454468},
{'label': 'love', 'score': 0.0009452480007894337},
{'label': 'anger', 'score': 0.0018055217806249857},
{'label': 'fear', 'score': 0.00041110432357527316},
{'label': 'surprise', 'score': 0.0002288572577526793}
]]
"""
```

## Dataset

Twitter-Sentiment-Analysis.
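
A quick sketch of inspecting the dataset with the `datasets` library, assuming the `emotion` dataset id listed in the metadata above:

```python
from datasets import load_dataset

emotion = load_dataset("emotion")
# The six emotion labels this model predicts:
print(emotion["train"].features["label"].names)
# ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
```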

## Training procedure

Colab Notebook

## Eval results

```python
{
  'test_accuracy': 0.938,
  'test_f1': 0.937932884041714,
  'test_loss': 0.1472451239824295,
  'test_mem_cpu_alloc_delta': 0,
  'test_mem_cpu_peaked_delta': 0,
  'test_mem_gpu_alloc_delta': 0,
  'test_mem_gpu_peaked_delta': 163454464,
  'test_runtime': 5.0164,
  'test_samples_per_second': 398.69
}
```
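
The original evaluation script is not included in this card; a minimal sketch of how numbers like these could be reproduced on the test split, assuming the `trainer` and tokenized `dataset` from the training sketch above:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Run prediction on the held-out test split.
output = trainer.predict(dataset["test"])
preds = np.argmax(output.predictions, axis=-1)

print("test_accuracy:", accuracy_score(output.label_ids, preds))
print("test_f1:", f1_score(output.label_ids, preds, average="weighted"))
```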

## Reference