---
tags:
  - text
  - stance
  - classification
language:
  - en
model-index:
  - name: BEtMan-Tw
    results:
      - task:
          type: stance-classification
          name: Text Classification
        dataset:
          type: stance
          name: stance
        metrics:
          - type: f1
            value: 75.8
          - type: accuracy
            value: 76.2
---

# BEtMan-Tw

This model is a fine-tuned version of `j-hartmann/sentiment-roberta-large-english-3-classes`, trained to predict 3 stance categories: attack, neutral, and support.

# Model usage
```python
from transformers import pipeline

model_path = "eevvgg/BEtMan-Tw"
cls_task = pipeline(task="text-classification", model=model_path, tokenizer=model_path)  # set device=0 to run on a GPU

sequence = ['his rambling has no clear ideas behind it',
            'That has nothing to do with medical care',
            "Turns around and shows how qualified she is because of her political career.",
            'She has very little to gain by speaking too much']

result = cls_task(sequence)

# Keep only the predicted label for each text.
labels = [i['label'] for i in result]

labels  # ['attack', 'neutral', 'support', 'attack']
```
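
Each item returned by the pipeline is a dict with a `label` and a `score` (the probability of the predicted class), so prediction confidences can be inspected alongside the labels:

```python
# Inspect the confidence score attached to each prediction.
scores = [i['score'] for i in result]
for text, label, score in zip(sequence, labels, scores):
    print(f"{label:>8}  {score:.3f}  {text}")
```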
                                        

# Intended uses & limitations

Classification of short texts up to 200 tokens (the model's maximum sequence length).
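
Longer inputs can be truncated to that limit by passing standard tokenizer arguments through the pipeline call; the snippet below is a usage sketch, not a setting taken from this card:

```python
# Truncate inputs longer than the 200-token limit before classification.
# truncation / max_length are standard transformers tokenizer kwargs.
result = cls_task(sequence, truncation=True, max_length=200)
```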

# Training procedure

## Training hyperparameters

The following hyperparameters were used during training:

- optimizer: Adam (learning rate 4e-5, decay 0.01)
- epochs: 3
- mini-batch size: 8
- loss: 0.719
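
For reference, here is a minimal fine-tuning sketch that approximates these settings with the Hugging Face `Trainer`. The original training script is not provided in this card, and the toy dataset, label ids, and output path below are illustrative placeholders only:

```python
# Hypothetical fine-tuning sketch approximating the listed hyperparameters
# (Adam-style optimizer, lr 4e-5, decay 0.01, 3 epochs, batch size 8).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "j-hartmann/sentiment-roberta-large-english-3-classes"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

# Placeholder data, not the actual training set.
toy = Dataset.from_dict({
    "text": ["his rambling has no clear ideas behind it",
             "That has nothing to do with medical care"],
    "label": [0, 1],  # placeholder stance label ids
})
toy = toy.map(lambda x: tokenizer(x["text"], truncation=True, max_length=200),
              batched=True)

args = TrainingArguments(
    output_dir="stance-model",        # placeholder output path
    learning_rate=4e-5,               # from the card
    weight_decay=0.01,                # 'decay' in the card
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

Trainer(model=model, args=args, train_dataset=toy, tokenizer=tokenizer).train()
```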

# Evaluation data

It achieves the following results on the evaluation set:

- macro f1-score: 0.758
- weighted f1-score: 0.762
- accuracy: 0.762

Per-class results:

| label | precision | recall | f1-score | support |
|------:|----------:|-------:|---------:|--------:|
| 0     | 0.762     | 0.770  | 0.766    | 200     |
| 1     | 0.759     | 0.775  | 0.767    | 191     |
| 2     | 0.769     | 0.714  | 0.741    | 84      |
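
A minimal sketch of how these metrics can be recomputed with scikit-learn from gold labels and pipeline predictions; the arrays below are illustrative placeholders, not the actual evaluation data:

```python
# Hypothetical evaluation sketch: y_true / y_pred stand in for the real
# gold labels and model predictions on the held-out set.
from sklearn.metrics import accuracy_score, classification_report, f1_score

y_true = [0, 1, 2, 0, 1, 2]   # placeholder gold stance labels
y_pred = [0, 1, 2, 0, 2, 1]   # placeholder model predictions

print(classification_report(y_true, y_pred, digits=3))
print("macro f1:   ", f1_score(y_true, y_pred, average="macro"))
print("weighted f1:", f1_score(y_true, y_pred, average="weighted"))
print("accuracy:   ", accuracy_score(y_true, y_pred))
```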