---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Distilbert-uncased
  results: []
---

# Distilbert-uncased-AS

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a custom dataset of tweets. It achieves a low squared error, which means the predicted values are very close to the observed (gold) values.

It achieves the following results on the evaluation set:
- Loss: 0.3510
- Rmse: 0.2543

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2091        | 1.0   | 642  | 0.1933          | 0.3052 |
| 0.1334        | 2.0   | 1284 | 0.1909          | 0.2481 |
| 0.0684        | 3.0   | 1926 | 0.2617          | 0.2466 |
| 0.0355        | 4.0   | 2568 | 0.3113          | 0.2513 |
| 0.0116        | 5.0   | 3210 | 0.3510          | 0.2543 |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.19.1
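
Since RMSE is the reported metric, the model appears to act as a single-output regressor over tweet text. Below is a minimal inference sketch under that assumption; the repository id `user/Distilbert-uncased-AS` is a placeholder for wherever this checkpoint is actually hosted, and the one-label regression head (`num_labels=1`) is inferred from the metric, not stated in the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repository id; replace with the actual path of this checkpoint.
MODEL_ID = "user/Distilbert-uncased-AS"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# num_labels=1 assumes a regression head, consistent with the RMSE metric above.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=1)
model.eval()

tweets = ["I love this!", "This is the worst day ever."]
inputs = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The single logit per example is the predicted score.
scores = outputs.logits.squeeze(-1)
print(scores.tolist())
```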
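
For reproducibility, the hyperparameters listed above map directly onto `transformers.TrainingArguments`. The sketch below only mirrors what the card reports; the output directory and the per-epoch evaluation strategy are assumptions, and the original training script is not part of this card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in "Training hyperparameters".
training_args = TrainingArguments(
    output_dir="distilbert-uncased-as",   # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: the results table shows per-epoch validation
)
```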