---
license: mit
tags:
- generated_from_trainer
datasets:
- surrey-nlp/PLOD-filtered
metrics:
- precision
- recall
- f1
- accuracy
model_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
widget:
- text: Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons.
- text: RAFs are plotted for a selection of neurons in the dorsal zone (DZ) of auditory cortex in Figure 1.
- text: Images were acquired using a GE 3.0T MRI scanner with an upgrade for echo-planar imaging (EPI).
base_model: roberta-base
model-index:
- name: roberta-base-finetuned-ner
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: surrey-nlp/PLOD-filtered
      type: token-classification
      args: PLODfiltered
    metrics:
    - type: precision
      value: 0.9644756447594547
      name: Precision
    - type: recall
      value: 0.9583209148378798
      name: Recall
    - type: f1
      value: 0.9613884293804785
      name: F1
    - type: accuracy
      value: 0.9575894768204436
      name: Accuracy
---

# roberta-base-finetuned-ner

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [PLOD-filtered](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1148
- Precision: 0.9645
- Recall: 0.9583
- F1: 0.9614
- Accuracy: 0.9576

## Model description

RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the masked language modeling (MLM) objective: taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model is fine-tuned on the [PLOD-Filtered](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) dataset, which is used for both training and evaluation. The PLOD dataset was published at LREC 2022 and supports building sequence-labeling models for the task of abbreviation detection.
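## How to use

A minimal inference sketch using the `transformers` token-classification pipeline. The Hub model ID below is an assumption based on this card's name; substitute the actual checkpoint path, and note that the exact label set (e.g. abbreviation vs. long-form tags) comes from the model's config rather than this card.

```python
# Minimal inference sketch (not from the original card).
# "surrey-nlp/roberta-base-finetuned-ner" is a hypothetical Hub ID;
# replace it with the real checkpoint path.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "surrey-nlp/roberta-base-finetuned-ner"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# "simple" aggregation merges subword pieces back into word-level predictions.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."))
```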
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto a `TrainingArguments` configuration follows this section):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1179        | 1.99  | 7000  | 0.1130          | 0.9602    | 0.9517 | 0.9559 | 0.9522   |
| 0.0878        | 3.98  | 14000 | 0.1106          | 0.9647    | 0.9564 | 0.9606 | 0.9567   |
| 0.0724        | 5.96  | 21000 | 0.1149          | 0.9646    | 0.9582 | 0.9614 | 0.9576   |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
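For reference, the hyperparameters listed above correspond to roughly the following `TrainingArguments`. This is an illustrative sketch, not the authors' original training script; the output directory name is hypothetical.

```python
# Sketch mapping the card's hyperparameters onto transformers' TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="roberta-base-finetuned-ner",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
    # default optimizer settings, so no extra optimizer arguments are needed.
)
```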