|
--- |
|
tags: |
|
- generated_from_trainer |
|
datasets: |
|
- tweet_eval |
|
model-index: |
|
- name: roberta-sentiment-analysis-finetune |
|
results: [] |
|
--- |
|
|
|
|
|
|
# roberta-sentiment-analysis-finetune
|
|
|
This model is a version of RoBERTa, a transformer-based language model developed by Facebook AI and pre-trained on a large amount of varied data, fine-tuned for sentiment analysis on the tweet_eval dataset. It predicts the emotional tone of a text as positive, negative, or neutral.
|
|
|
## Model description |
|
|
|
A RoBERTa checkpoint fine-tuned for three-class sentiment classification (negative, neutral, positive). The exact base checkpoint used for fine-tuning is not recorded in this card.
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for sentiment classification of short English texts, particularly tweets, assigning one of three labels (negative, neutral, positive). As it was trained on tweet data, it may not transfer well to longer documents, other domains, or other languages. A minimal usage sketch with the 🤗 Transformers `pipeline` API follows; the model id used there is a placeholder for this checkpoint's actual Hub path, and the printed label names depend on the checkpoint's `id2label` mapping.
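
```python
from transformers import pipeline

# Placeholder model id: substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "sentiment-analysis",
    model="roberta-sentiment-analysis-finetune",
)

print(classifier("I love the new design of this app!"))
# e.g. [{'label': 'positive', 'score': 0.98}]
# (label names depend on the checkpoint's id2label mapping)
```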
|
|
|
## Training and evaluation data |
|
|
|
The model was fine-tuned and evaluated on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset. The per-epoch step count in the results below (713 steps at batch size 64, roughly 45.6k examples) is consistent with the train split of its `sentiment` configuration, so a loading sketch under that assumption is shown below.
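
```python
from datasets import load_dataset

# Assumption: the "sentiment" configuration of tweet_eval
# (labels: 0 = negative, 1 = neutral, 2 = positive).
dataset = load_dataset("tweet_eval", "sentiment")

print(dataset)              # train / validation / test splits
print(dataset["train"][0])  # {'text': ..., 'label': ...}
```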
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a corresponding `TrainingArguments` sketch follows the list):
|
- learning_rate: 1e-05 |
|
- train_batch_size: 64 |
|
- eval_batch_size: 64 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_steps: 50 |
|
- num_epochs: 10 |
|
- mixed_precision_training: Native AMP |
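
A minimal sketch of how these hyperparameters map onto 🤗 Transformers `TrainingArguments`; the `output_dir` and the per-epoch evaluation strategy are assumptions (the results table reports one evaluation per epoch), not values recorded by the original run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-sentiment-analysis-finetune",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=10,
    fp16=True,                     # Native AMP mixed-precision training
    evaluation_strategy="epoch",   # assumed: one eval per epoch, as in the table
)
```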
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | |
|
|:-------------:|:-----:|:----:|:---------------:| |
|
| 0.5451 | 1.0 | 713 | 0.5422 | |
|
| 0.4785 | 2.0 | 1426 | 0.5585 | |
|
| 0.4199 | 3.0 | 2139 | 0.5785 | |
|
| 0.3608 | 4.0 | 2852 | 0.6038 | |
|
| 0.3117 | 5.0 | 3565 | 0.6713 | |
|
| 0.2684 | 6.0 | 4278 | 0.7366 | |
|
| 0.2403 | 7.0 | 4991 | 0.7737 | |
|
| 0.2137 | 8.0 | 5704 | 0.8276 | |
|
| 0.1926 | 9.0 | 6417 | 0.8597 | |
|
| 0.1778        | 10.0  | 7130 | 0.8863          |

The validation loss reaches its minimum (0.5422) after the first epoch and rises steadily thereafter while the training loss keeps falling, which suggests overfitting; the epoch-1 checkpoint may generalize best.
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.25.1 |
|
- Pytorch 1.13.0+cu116 |
|
- Datasets 2.8.0 |
|
- Tokenizers 0.13.2 |
|
|