---
annotations_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Shenbin Qian, Diptesh Kanojia, Constantin Orasan
language_creators:
- found
language:
- en
license: cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
---

1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).
2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).
3. The [SDU Shared Task](https://sites.google.com/view/sdu-aaai22/home) data we use for zero-shot testing is [available here](https://huggingface.co/datasets/surrey-nlp/SDU-test).

# Dataset Card for PLOD-filtered

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection
- **Paper:** https://arxiv.org/abs/2204.12061
- **Leaderboard:** https://paperswithcode.com/sota/abbreviationdetection-on-plod-filtered
- **Point of Contact:** [Diptesh Kanojia](mailto:d.kanojia@surrey.ac.uk)

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The data was extracted from PLOS journals online and then tokenized and normalized.

#### Who are the source language producers?

PLOS journals

## Additional Information

### Dataset Curators

The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, and Constantin Orasan. This subset was created by Shenbin Qian from the new, clean version of PLOD to be released at LREC-COLING 2024.

### Licensing Information

CC-BY-SA 4.0

### Citation Information

[Needs More Information]

### Installation

We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).
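Independently of the training framework, the dataset itself can be loaded and inspected with the HuggingFace `datasets` library. The sketch below is only illustrative; the split and column names (`train`, `tokens`, `ner_tags`) are assumptions based on the usual layout of the PLOD releases, so check the dataset viewer for the exact schema.

```python
# Minimal sketch: inspect PLOD-filtered with the HuggingFace `datasets` library.
# Split/column names ("train", "tokens", "ner_tags") are assumptions, not guaranteed.
from datasets import load_dataset

plod = load_dataset("surrey-nlp/PLOD-filtered")
print(plod)  # shows the available splits and their sizes

sample = plod["train"][0]
print(sample["tokens"])    # pre-tokenized sentence
print(sample["ner_tags"])  # integer-encoded abbreviation/long-form labels

# Map label ids back to their string names
label_names = plod["train"].features["ner_tags"].feature.names
print(label_names)
```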
Please see the instructions at these websites to set up your own custom training with our dataset and reproduce the experiments using spaCy.

OR
You can also reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb), which uses the HuggingFace `Trainer` class to perform the same experiments. The exact hyperparameters can be obtained from the model README cards linked below. Before starting, please perform the following steps:

```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```

Now, you can use the notebook to reproduce the experiments.

### Model(s)

Our best-performing models are hosted on the HuggingFace models repository:

| Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description |
| --- | :---: | :---: | --- |
| [RoBERTa-large](https://huggingface.co/roberta-large) | [RoBERTa-large-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa-large language model |
| [RoBERTa-base](https://huggingface.co/roberta-base) | -soon- | [RoBERTa-base-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa-base language model |
| [ALBERT-large-v2](https://huggingface.co/albert-large-v2) | [ALBERT-large-v2-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the ALBERT-large-v2 language model |

Via the links above, the models can be used with the Inference API directly in the web browser. We have placed some examples with the API for testing.
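The hosted Inference API can also be queried programmatically. Below is a minimal sketch that uses the generic HuggingFace Inference API endpoint for the RoBERTa-large checkpoint linked above; the endpoint pattern, placeholder token, and example sentence are illustrative assumptions rather than part of the official instructions.

```python
# Minimal sketch: query the hosted Inference API for the fine-tuned model.
# Assumes the generic HuggingFace Inference API endpoint; replace the token with your own.
import requests

API_URL = "https://api-inference.huggingface.co/models/surrey-nlp/roberta-large-finetuned-abbr"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder access token

def detect_abbreviations(text: str):
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    response.raise_for_status()
    return response.json()

print(detect_abbreviations(
    "The World Health Organization (WHO) issued new guidelines."
))
```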
### Usage

You can use the HuggingFace model links above to find instructions for using these models in Python locally, together with the notebook provided in the Git repo.
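For local inference, a token-classification pipeline from the `transformers` library is usually sufficient. The sketch below loads the fine-tuned RoBERTa-large checkpoint listed above; the aggregation strategy and example sentence are illustrative assumptions, not the exact setup from the official notebook.

```python
# Minimal sketch: run the fine-tuned abbreviation-detection model locally with transformers.
# The aggregation strategy and example text are illustrative; see the repo notebook for the
# exact configuration used in the paper's experiments.
from transformers import pipeline

abbr_detector = pipeline(
    "token-classification",
    model="surrey-nlp/roberta-large-finetuned-abbr",
    aggregation_strategy="simple",  # groups sub-word tokens into whole spans
)

text = "Light dissolved inorganic carbon (DIC) resulted from the oxidation of hydrocarbons."
for span in abbr_detector(text):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```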