finetuned-parsbert-uncased-ArmanEmo

This model is a fine-tuned version of HooshvareLab/bert-base-parsbert-uncased, apparently on the ArmanEmo Persian emotion dataset given the repository name (the auto-generated card records the dataset as unknown). It achieves the following results on the evaluation set (perfect scores across every class are unusual and may indicate overlap between the training and evaluation splits):

  • Loss: 0.0060
  • Accuracy: 1.0
  • Precision Macro: 1.0
  • Recall Macro: 1.0
  • F1 Macro: 1.0
  • F1 C0: 1.0
  • F1 C1: 1.0
  • F1 C2: 1.0
  • F1 C3: 1.0
  • F1 C4: 1.0
  • F1 C5: 1.0
  • F1 C6: 1.0
  • Recall C0: 1.0
  • Recall C1: 1.0
  • Recall C2: 1.0
  • Recall C3: 1.0
  • Recall C4: 1.0
  • Recall C5: 1.0
  • Recall C6: 1.0
  • Precision C0: 1.0
  • Precision C1: 1.0
  • Precision C2: 1.0
  • Precision C3: 1.0
  • Precision C4: 1.0
  • Precision C5: 1.0
  • Precision C6: 1.0
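
As a quick sanity check of the checkpoint, it can be loaded with the standard Transformers text-classification pipeline. This is a minimal sketch, not code from the original card: the example sentence is illustrative, the mapping of labels C0-C6 to concrete emotions is not documented here, and the gated repository may require an authentication token.

```python
# Minimal loading sketch using the standard Transformers pipeline API.
# The model id comes from this card; everything else is illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mohammadhabp/finetuned-parsbert-uncased-ArmanEmo",
    # token="hf_...",  # the repo is gated; a token may be required
)

# ParsBERT is a Persian model, so the input is Persian
# ("I am very happy today"). The returned label will be one of the
# seven classes (C0-C6); their emotion names are not documented here.
print(classifier("امروز خیلی خوشحالم"))
```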

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 10
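
The list above maps one-to-one onto transformers.TrainingArguments. The sketch below is a plausible reconstruction under the assumption that the standard Trainer API was used; the output directory name is a placeholder, and the actual training script is not published.

```python
# Plausible reconstruction of the configuration from the listed
# hyperparameters; assumes the standard Trainer API (not published
# with this card). The output_dir name is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-parsbert-uncased-ArmanEmo",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
)
```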

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 C0 | F1 C1 | F1 C2 | F1 C3 | F1 C4 | F1 C5 | F1 C6 | Recall C0 | Recall C1 | Recall C2 | Recall C3 | Recall C4 | Recall C5 | Recall C6 | Precision C0 | Precision C1 | Precision C2 | Precision C3 | Precision C4 | Precision C5 | Precision C6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 144 | 0.3531 | 0.5065 | 0.3687 | 0.3986 | 0.3583 | 0.6766 | 0.3370 | 0.4966 | 0.0 | 0.4164 | 0.0 | 0.5817 | 0.9091 | 0.2366 | 0.7143 | 0.0 | 0.3862 | 0.0 | 0.5440 | 0.5388 | 0.5849 | 0.3806 | 0.0 | 0.4516 | 0.0 | 0.625 |
| No log | 2.0 | 288 | 0.2289 | 0.7298 | 0.5210 | 0.5631 | 0.5333 | 0.8451 | 0.7790 | 0.6063 | 0.0 | 0.6905 | 0.0 | 0.8120 | 0.9818 | 0.8206 | 0.5 | 0.0 | 0.8 | 0.0 | 0.8394 | 0.7418 | 0.7414 | 0.77 | 0.0 | 0.6073 | 0.0 | 0.7864 |
| No log | 3.0 | 432 | 0.1129 | 0.9270 | 0.9239 | 0.8955 | 0.9061 | 0.9762 | 0.9492 | 0.8910 | 0.9027 | 0.8792 | 0.8108 | 0.9333 | 0.9709 | 0.9618 | 0.9026 | 0.8947 | 0.9034 | 0.6923 | 0.9430 | 0.9816 | 0.9368 | 0.8797 | 0.9107 | 0.8562 | 0.9783 | 0.9239 |
| 0.3027 | 4.0 | 576 | 0.0567 | 0.9652 | 0.9491 | 0.9545 | 0.9507 | 0.9909 | 0.9847 | 0.9161 | 0.9402 | 0.9565 | 0.8872 | 0.9796 | 0.9927 | 0.9847 | 0.8506 | 0.9649 | 0.9862 | 0.9077 | 0.9948 | 0.9891 | 0.9847 | 0.9924 | 0.9167 | 0.9286 | 0.8676 | 0.9648 |
| 0.3027 | 5.0 | 720 | 0.0296 | 0.9844 | 0.9736 | 0.9819 | 0.9776 | 0.9964 | 0.9885 | 0.9673 | 0.9913 | 0.9862 | 0.9185 | 0.9948 | 0.9927 | 0.9847 | 0.9610 | 1.0 | 0.9862 | 0.9538 | 0.9948 | 1.0 | 0.9923 | 0.9737 | 0.9828 | 0.9862 | 0.8857 | 0.9948 |
| 0.3027 | 6.0 | 864 | 0.0144 | 0.9965 | 0.9939 | 0.9937 | 0.9938 | 0.9982 | 1.0 | 0.9935 | 0.9913 | 0.9965 | 0.9767 | 1.0 | 1.0 | 1.0 | 0.9935 | 1.0 | 0.9931 | 0.9692 | 1.0 | 0.9964 | 1.0 | 0.9935 | 0.9828 | 1.0 | 0.9844 | 1.0 |
| 0.0482 | 7.0 | 1008 | 0.0088 | 0.9991 | 0.9993 | 0.9990 | 0.9991 | 1.0 | 1.0 | 1.0 | 1.0 | 0.9965 | 1.0 | 0.9974 | 1.0 | 1.0 | 1.0 | 1.0 | 0.9931 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.9948 |
| 0.0482 | 8.0 | 1152 | 0.0069 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0482 | 9.0 | 1296 | 0.0063 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0482 | 10.0 | 1440 | 0.0060 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
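
The macro scores in the table are unweighted means of the seven per-class values, which can be reproduced with scikit-learn. Below is a sketch of a Trainer-compatible compute_metrics function under the assumption of seven integer-encoded labels (0-6); the metric code actually used for this card is not published, and the metric key names are illustrative.

```python
# Sketch of a compute_metrics function that yields the accuracy,
# macro, and per-class precision/recall/F1 reported above. Assumes
# seven integer labels 0..6; not the author's actual code.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, y_true = eval_pred
    y_pred = np.argmax(logits, axis=-1)
    # average=None returns one value per class; macro = unweighted mean
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=list(range(7)), average=None, zero_division=0
    )
    metrics = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision_macro": precision.mean(),
        "recall_macro": recall.mean(),
        "f1_macro": f1.mean(),
    }
    for c in range(7):
        metrics[f"f1_c{c}"] = f1[c]
        metrics[f"recall_c{c}"] = recall[c]
        metrics[f"precision_c{c}"] = precision[c]
    return metrics
```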

Framework versions

  • Transformers 4.38.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Safetensors

  • Model size: 163M params
  • Tensor type: F32
