---
base_model: old_models/LaBSE/0_Transformer
tags:
- generated_from_trainer
model-index:
- name: bert_labse-finetuning-unhealthyConv-dropout005-epochs-10
  results: []
---

# bert_labse-finetuning-unhealthyConv-dropout005-epochs-10

This model is a fine-tuned version of [old_models/LaBSE/0_Transformer](https://huggingface.co/old_models/LaBSE/0_Transformer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7820
- Mse: 0.7820
- Rmse: 0.8843
- Mae: 0.4988
- R2: 0.8587

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Mse    | Rmse   | Mae    | R2     |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|
| 1.5159        | 1.0   | 3389  | 1.2618          | 1.2618 | 1.1233 | 0.7882 | 0.7720 |
| 1.0247        | 2.0   | 6778  | 1.1135          | 1.1135 | 1.0552 | 0.7067 | 0.7988 |
| 0.7849        | 3.0   | 10167 | 1.1353          | 1.1353 | 1.0655 | 0.7289 | 0.7949 |
| 0.6271        | 4.0   | 13556 | 0.9255          | 0.9255 | 0.9620 | 0.6331 | 0.8328 |
| 0.5029        | 5.0   | 16945 | 0.9135          | 0.9135 | 0.9558 | 0.6148 | 0.8349 |
| 0.3947        | 6.0   | 20334 | 0.8166          | 0.8166 | 0.9036 | 0.5446 | 0.8525 |
| 0.3264        | 7.0   | 23723 | 0.8280          | 0.8280 | 0.9099 | 0.5552 | 0.8504 |
| 0.2774        | 8.0   | 27112 | 0.8125          | 0.8125 | 0.9014 | 0.5408 | 0.8532 |
| 0.2245        | 9.0   | 30501 | 0.7870          | 0.7870 | 0.8871 | 0.5034 | 0.8578 |
| 0.2028        | 10.0  | 33890 | 0.7820          | 0.7820 | 0.8843 | 0.4988 | 0.8587 |

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
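
The hyperparameters listed above map onto `TrainingArguments` from the Transformers version noted below. The sketch that follows is a minimal reconstruction of that configuration, assuming a standard `Trainer` run; the dataset, preprocessing, and model head are not documented in this card, so only the optimizer and schedule settings are shown.

```python
# Sketch of the reported training setup (assumption: a standard Trainer run;
# the dataset and preprocessing used for fine-tuning are not documented here).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_labse-finetuning-unhealthyConv-dropout005-epochs-10",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",   # linear decay, as listed above
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)

# These arguments would then be passed to a Trainer together with the model,
# tokenizer, and the (undocumented) train/eval datasets.
```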
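
Since the evaluation metrics (MSE, RMSE, MAE, R2) indicate a regression objective, the following is a minimal inference sketch, assuming the checkpoint exposes a single-output regression head and is available under the model name above; the checkpoint path is an assumption and should be replaced with the actual location.

```python
# Minimal inference sketch (assumption: the fine-tuned checkpoint has a
# single-output regression head, consistent with the MSE/RMSE/MAE/R2 metrics).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint id: replace with the actual local path or Hub id.
checkpoint = "bert_labse-finetuning-unhealthyConv-dropout005-epochs-10"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

texts = ["Example comment to score."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# For a regression head, logits has shape (batch_size, 1); squeeze to get scores.
scores = outputs.logits.squeeze(-1)
print(scores.tolist())
```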