---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: uniBERT.SciBERT.2
  results: []
---

# uniBERT.SciBERT.2

This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5863
- Accuracy: 0.5885
- F1: 0.5835
- Precision: 0.5880
- Recall: 0.5885

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged inference sketch is included at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

A sketch of these settings as a `TrainingArguments` object appears after the framework versions below.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 2.6736        | 1.0   | 187  | 2.1773          | 0.3847   | 0.3653 | 0.3986    | 0.3847 |
| 1.6286        | 2.0   | 374  | 1.6625          | 0.4906   | 0.4823 | 0.5288    | 0.4906 |
| 1.1733        | 3.0   | 561  | 1.5601          | 0.5282   | 0.5221 | 0.5430    | 0.5282 |
| 0.8032        | 4.0   | 748  | 1.4738          | 0.5550   | 0.5499 | 0.5616    | 0.5550 |
| 0.5888        | 5.0   | 935  | 1.4584          | 0.5603   | 0.5560 | 0.5685    | 0.5603 |
| 0.4449        | 6.0   | 1122 | 1.4952          | 0.5764   | 0.5741 | 0.5860    | 0.5764 |
| 0.271         | 7.0   | 1309 | 1.5141          | 0.5777   | 0.5724 | 0.5756    | 0.5777 |
| 0.2036        | 8.0   | 1496 | 1.5745          | 0.5737   | 0.5706 | 0.5785    | 0.5737 |
| 0.1993        | 9.0   | 1683 | 1.5754          | 0.5831   | 0.5792 | 0.5837    | 0.5831 |
| 0.1485        | 10.0  | 1870 | 1.5863          | 0.5885   | 0.5835 | 0.5880    | 0.5885 |

### Framework versions

- Transformers 4.39.3
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
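
### Hyperparameters as `TrainingArguments` (sketch)

The hyperparameters listed above map directly onto a `transformers.TrainingArguments` object. The following is a minimal re-creation of that configuration, not the actual training script: the dataset, model head, and `compute_metrics` function are not specified by this card, and the per-epoch evaluation setting is inferred from the training-results table.

```python
from transformers import TrainingArguments

# Sketch of the card's listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="uniBERT.SciBERT.2",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,              # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # inferred: validation metrics are reported once per epoch
)
```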
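
## How to use

The card does not state the task, but the reported accuracy/F1/precision/recall metrics suggest a classification head. Below is a minimal inference sketch under that assumption; the repository id is a placeholder for wherever this checkpoint is hosted, and the label set is unknown.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the checkpoint carries a sequence-classification head.
# Replace with the actual Hub repo id for this model.
model_name = "uniBERT.SciBERT.2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(
    "Scientific text to classify.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to a label name if the config defines one.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```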