---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- surrey-nlp/PLOD-unfiltered
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons.
- text: RAFs are plotted for a selection of neurons in the dorsal zone (DZ) of auditory cortex in Figure 1.
- text: Images were acquired using a GE 3.0T MRI scanner with an upgrade for echo-planar imaging (EPI).
base_model: albert-large-v2
model-index:
- name: albert-large-v2-finetuned-ner_with_callbacks
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: surrey-nlp/PLOD-unfiltered
      type: token-classification
      args: PLODunfiltered
    metrics:
    - type: precision
      value: 0.9655166719570215
      name: Precision
    - type: recall
      value: 0.9608483288141474
      name: Recall
    - type: f1
      value: 0.9631768437660728
      name: F1
    - type: accuracy
      value: 0.9589410429715819
      name: Accuracy
---

# albert-large-v2-finetuned-ner_with_callbacks

This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the [PLOD-unfiltered](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1235
- Precision: 0.9655
- Recall: 0.9608
- F1: 0.9632
- Accuracy: 0.9589

## Model description

[ALBERT large v2](https://huggingface.co/albert-large-v2) fine-tuned for token classification: given running text, the model tags abbreviations and their long forms, as illustrated by the widget examples above (e.g. "DIC", "DZ", "EPI").

## Intended uses & limitations

The model is intended for detecting abbreviations and their long forms in English scientific and technical text, the domain of its training data. Performance on other domains, languages, or informal text has not been evaluated here.

## Training and evaluation data

The model was fine-tuned and evaluated on [PLOD-unfiltered](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered), a token-classification dataset for abbreviation detection built from scientific publications.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1377        | 0.49  | 7000  | 0.1294          | 0.9563    | 0.9422 | 0.9492 | 0.9436   |
| 0.1244        | 0.98  | 14000 | 0.1165          | 0.9589    | 0.9504 | 0.9546 | 0.9499   |
| 0.107         | 1.48  | 21000 | 0.1140          | 0.9603    | 0.9509 | 0.9556 | 0.9511   |
| 0.1088        | 1.97  | 28000 | 0.1086          | 0.9613    | 0.9551 | 0.9582 | 0.9536   |
| 0.0918        | 2.46  | 35000 | 0.1059          | 0.9617    | 0.9582 | 0.9600 | 0.9556   |
| 0.0847        | 2.95  | 42000 | 0.1067          | 0.9620    | 0.9586 | 0.9603 | 0.9559   |
| 0.0734        | 3.44  | 49000 | 0.1188          | 0.9646    | 0.9588 | 0.9617 | 0.9574   |
| 0.0725        | 3.93  | 56000 | 0.1065          | 0.9660    | 0.9599 | 0.9630 | 0.9588   |
| 0.0547        | 4.43  | 63000 | 0.1273          | 0.9662    | 0.9602 | 0.9632 | 0.9590   |
| 0.0542        | 4.92  | 70000 | 0.1235          | 0.9655    | 0.9608 | 0.9632 | 0.9589   |
| 0.0374        | 5.41  | 77000 | 0.1401          | 0.9647    | 0.9613 | 0.9630 | 0.9586   |
| 0.0417        | 5.9   | 84000 | 0.1380          | 0.9641    | 0.9622 | 0.9632 | 0.9588   |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
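
## How to use

A minimal inference sketch using the Transformers token-classification pipeline. The checkpoint id below is an assumption inferred from this card's title and the dataset's namespace, and may differ from the actual Hub path; `aggregation_strategy="simple"` is just one convenient way to merge sub-word pieces into whole-word predictions.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Assumed Hub repository id, taken from the card title; adjust if the repo differs.
checkpoint = "surrey-nlp/albert-large-v2-finetuned-ner_with_callbacks"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

# "simple" aggregation groups sub-word tokens back into whole-word spans.
abbreviation_tagger = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(abbreviation_tagger(
    "Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."
))
```

Each returned span carries an entity group (an abbreviation or long-form tag from the PLOD label set), a confidence score, and character offsets into the input.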
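
## Reproducing the training configuration

A sketch of how the hyperparameters above map onto `TrainingArguments` in Transformers 4.18. The output directory, the 7,000-step evaluation/save cadence (taken from the results table), and best-model loading are assumptions: the "with_callbacks" suffix, and the fact that the reported evaluation results match the step-70000 checkpoint rather than the final one, suggest best-checkpoint selection, but the card does not state the exact setup.

```python
from transformers import TrainingArguments

# Hyperparameters are copied from the card; cadence and paths are assumptions.
args = TrainingArguments(
    output_dir="albert-large-v2-finetuned-ner_with_callbacks",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=6,
    lr_scheduler_type="linear",   # the Adam betas/epsilon above are the library defaults
    evaluation_strategy="steps",  # the results table evaluates every 7,000 steps
    eval_steps=7_000,
    save_strategy="steps",        # must match the evaluation cadence for best-model loading
    save_steps=7_000,
    load_best_model_at_end=True,  # assumed from the "with_callbacks" name
)
```

These arguments would then be passed to a `Trainer` together with the tokenized PLOD-unfiltered splits and, presumably, an early-stopping or checkpoint-selection callback.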