# spanish-attitude
This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-cased for token classification on the jorgeortizfuentes/spanish_attitude_conll2003 dataset. It achieves the following results on the evaluation set (a sketch of how such span-level metrics are computed follows the list):
- Loss: 0.6388
- Affect Precision: 0.0
- Affect Recall: 0.0
- Affect F1: 0.0
- Affect Number: 61
- Appreciation Precision: 0.2208
- Appreciation Recall: 0.3401
- Appreciation F1: 0.2677
- Appreciation Number: 294
- Judgment (j1) Precision: 0.0
- Judgment (j1) Recall: 0.0
- Judgment (j1) F1: 0.0
- Judgment (j1) Number: 2
- Social esteem (j2) Precision: 0.0
- Social esteem (j2) Recall: 0.0
- Social esteem (j2) F1: 0.0
- Social esteem (j2) Number: 2
- Social sanction (j2) Precision: 0.0
- Social sanction (j2) Recall: 0.0
- Social sanction (j2) F1: 0.0
- Social sanction (j2) Number: 1
- Capacity (j3) Precision: 0.1037
- Capacity (j3) Recall: 0.1977
- Capacity (j3) F1: 0.1360
- Capacity (j3) Number: 86
- Normality (j3) Precision: 0.0
- Normality (j3) Recall: 0.0
- Normality (j3) F1: 0.0
- Normality (j3) Number: 62
- Propriety (j3) Precision: 0.1586
- Propriety (j3) Recall: 0.2791
- Propriety (j3) F1: 0.2022
- Propriety (j3) Number: 129
- Tenacity (j3) Precision: 0.0
- Tenacity (j3) Recall: 0.0
- Tenacity (j3) F1: 0.0
- Tenacity (j3) Number: 47
- Veracity (j3) Precision: 0.0
- Veracity (j3) Recall: 0.0
- Veracity (j3) F1: 0.0
- Veracity (j3) Number: 20
- Overall Precision: 0.1792
- Overall Recall: 0.2173
- Overall F1: 0.1964
- Overall Accuracy: 0.8250
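
The figures above are per-label span-level precision, recall, and F1 plus a token-level overall accuracy, in the style produced by seqeval. The snippet below is a minimal sketch of how such metrics are typically computed for token-classification outputs; it assumes seqeval-style evaluation, and the tag sequences and label names are illustrative, not drawn from the dataset.

```python
# Minimal sketch: span-level precision/recall/F1 with seqeval.
# The BIO tag sequences below are illustrative only.
from seqeval.metrics import (
    classification_report,
    precision_score,
    recall_score,
    f1_score,
)

# Gold and predicted tag sequences (one inner list per sentence).
y_true = [["O", "B-Appreciation", "I-Appreciation", "O", "B-Affect"]]
y_pred = [["O", "B-Appreciation", "I-Appreciation", "O", "O"]]

print(precision_score(y_true, y_pred))        # overall span-level precision
print(recall_score(y_true, y_pred))           # overall span-level recall
print(f1_score(y_true, y_pred))               # overall span-level F1
print(classification_report(y_true, y_pred))  # per-label breakdown
```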
## Model description
More information needed
## Intended uses & limitations
More information needed
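
Pending a fuller write-up, the following is a minimal inference sketch using the transformers token-classification pipeline. The repository id `jorgeortizfuentes/spanish-attitude` is an assumption inferred from the model name, not confirmed by this card, and the example sentence is illustrative.

```python
# Hedged usage sketch: the repo id below is an assumption inferred from the
# model name and may differ from the actual Hub location.
from transformers import pipeline

nlp = pipeline(
    "token-classification",
    model="jorgeortizfuentes/spanish-attitude",  # assumed repo id
    aggregation_strategy="simple",               # merge subword pieces into spans
)

print(nlp("La película fue realmente maravillosa, aunque el final decepcionó."))
```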
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a TrainingArguments sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
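
As an illustration only, the hyperparameters above map roughly onto the following transformers TrainingArguments. Dataset preparation, label alignment, and the Trainer call are omitted, and the output directory name is an assumption.

```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# Dataset preparation and the Trainer call are omitted.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="spanish-attitude",  # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default
    # used by Trainer, matching the "optimizer" entry above.
)
```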
### Training results

### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2