---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
model-index:
- name: CRAFT_SciBERT_NER
  results: []
---

# CRAFT_SciBERT_NER

This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) for named-entity recognition on the CRAFT (Colorado Richly Annotated Full Text) corpus.
It achieves the following results on the evaluation set:

- Loss: 0.1143
- Seqeval classification report:

| Label        | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.74      | 0.70   | 0.72     | 457     |
| CL           | 0.82      | 0.75   | 0.78     | 1099    |
| GGP          | 0.92      | 0.93   | 0.93     | 2232    |
| GO           | 0.78      | 0.84   | 0.81     | 2508    |
| SO           | 0.83      | 0.81   | 0.82     | 1365    |
| Taxon        | 0.99      | 0.99   | 0.99     | 87655   |
| micro avg    | 0.98      | 0.98   | 0.98     | 95316   |
| macro avg    | 0.85      | 0.84   | 0.84     | 95316   |
| weighted avg | 0.98      | 0.98   | 0.98     | 95316   |

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 347  | 0.1140          |
| 0.1263        | 2.0   | 695  | 0.1126          |
| 0.0326        | 3.0   | 1041 | 0.1143          |

Per-label seqeval results (precision / recall / F1) at each evaluation epoch:

| Label        | Epoch 1            | Epoch 2            | Epoch 3            | Support |
|:-------------|:------------------:|:------------------:|:------------------:|:-------:|
| CHEBI        | 0.66 / 0.69 / 0.67 | 0.73 / 0.69 / 0.71 | 0.74 / 0.70 / 0.72 | 457     |
| CL           | 0.83 / 0.69 / 0.75 | 0.85 / 0.72 / 0.78 | 0.82 / 0.75 / 0.78 | 1099    |
| GGP          | 0.89 / 0.93 / 0.91 | 0.91 / 0.93 / 0.92 | 0.92 / 0.93 / 0.93 | 2232    |
| GO           | 0.76 / 0.85 / 0.80 | 0.74 / 0.87 / 0.80 | 0.78 / 0.84 / 0.81 | 2508    |
| SO           | 0.79 / 0.73 / 0.76 | 0.82 / 0.80 / 0.81 | 0.83 / 0.81 / 0.82 | 1365    |
| Taxon        | 0.99 / 0.99 / 0.99 | 0.99 / 0.99 / 0.99 | 0.99 / 0.99 / 0.99 | 87655   |
| micro avg    | 0.97 / 0.97 / 0.97 | 0.97 / 0.97 / 0.97 | 0.98 / 0.98 / 0.98 | 95316   |
| macro avg    | 0.82 / 0.81 / 0.81 | 0.84 / 0.83 / 0.83 | 0.85 / 0.84 / 0.84 | 95316   |
| weighted avg | 0.97 / 0.97 / 0.97 | 0.97 / 0.97 / 0.97 | 0.98 / 0.98 / 0.98 | 95316   |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
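
## How to use

A minimal inference sketch using the `transformers` token-classification pipeline. The model ID below is a placeholder (substitute the actual Hub ID or a local path to this checkpoint), and the example sentence is illustrative, not taken from the card.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="CRAFT_SciBERT_NER",      # placeholder: Hub ID or local checkpoint path
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Illustrative input; any biomedical text works.
text = "BRCA1 is expressed in epithelial cells of Mus musculus."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```

With `aggregation_strategy="simple"`, the pipeline groups consecutive sub-word predictions into whole entity spans, so each printed row is one predicted entity with its label and confidence.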
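
## Reproducing the training setup

For reference, the hyperparameters listed under "Training hyperparameters" map onto `TrainingArguments` roughly as follows. This is a reconstruction, not the exact training script: `output_dir` is a placeholder, and `evaluation_strategy="epoch"` is inferred from the per-epoch results table. The Adam betas/epsilon and the linear schedule match the `Trainer` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="CRAFT_SciBERT_NER",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",     # inferred: the card reports metrics once per epoch
)
```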
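
## Computing the seqeval report

The classification reports above come from the `seqeval` library, which scores NER at the entity-span level over BIO-tagged sequences rather than per token. A toy example with made-up tag sequences (not CRAFT data):

```python
from seqeval.metrics import classification_report

# Made-up gold and predicted tag sequences, one inner list per sentence.
y_true = [["B-GGP", "O", "B-Taxon", "I-Taxon", "O", "B-CHEBI"]]
y_pred = [["B-GGP", "O", "B-Taxon", "I-Taxon", "O", "O"]]

print(classification_report(y_true, y_pred))
```

Because scoring is span-level, a prediction only counts as correct when both the label and the full entity boundary match the gold annotation.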