annotations_creators:
  - >-
    Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Shenbin Qian, Diptesh
    Kanojia, Constantin Orasan
language_creators:
  - found
language:
  - en
license: cc-by-sa-4.0
multilinguality:
  - monolingual
size_categories:
  - 100K<n<1M
source_datasets:
  - original
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
paperswithcode_id: plod-filtered
pretty_name: 'PLOD: An Abbreviation Detection Dataset'
dataset_info:
  features:
    - name: tokens
      sequence: string
    - name: pos_tags
      sequence: string
    - name: ner_tags
      sequence: string
  splits:
    - name: train
      num_bytes: 958388
      num_examples: 1072
    - name: validation
      num_bytes: 119188
      num_examples: 126
    - name: test
      num_bytes: 119336
      num_examples: 153
  download_size: 244828
  dataset_size: 1196912
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*

# PLOD: An Abbreviation Detection Dataset

This is the repository for the PLOD dataset subset used for the coursework (CW) in the NLP module (2023-2024) at the University of Surrey.

## Dataset Summary

PLOD is an English-language dataset in which abbreviations and their long forms are tagged in text. The data was collected for research from PLOS journals, which index abbreviations and their long forms in the text. The dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.

## Supported Tasks and Leaderboards

This dataset primarily supports the Abbreviation Detection task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.

## Languages

English

## Dataset Structure

### Data Instances

A typical data point comprises the tokens present in the text, the pos_tags for the corresponding tokens (obtained with spaCy), and the ner_tags, which are limited to AC for acronyms and LF for long forms.

An example from the dataset:

```python
{
  'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
  'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
  'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
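Given token/tag pairs like the example above, abbreviation and long-form spans can be recovered with the usual BIO-style grouping. A minimal sketch, assuming string tags of the form B-AC, B-LF, I-LF and B-O (the authoritative label set should be read from the dataset's features when you load it):

```python
# Sketch: group BIO-style tags into abbreviation (AC) and long-form (LF) spans.
# The tag names (B-AC, B-LF, I-LF, B-O) are assumed here -- check the
# dataset's features for the real label list.

def extract_spans(tokens, tags):
    """Return (label, text) pairs for each AC/LF span in a tagged sentence."""
    spans = []
    current_label, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") and tag != "B-O":
            if current_tokens:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_tokens and tag[2:] == current_label:
            current_tokens.append(token)
        else:  # B-O, or a tag that breaks the current span
            if current_tokens:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = None, []
    if current_tokens:
        spans.append((current_label, " ".join(current_tokens)))
    return spans

# Fragment of the example above: "risk ratios ( RRs )"
tokens = ["risk", "ratios", "(", "RRs", ")"]
tags = ["B-LF", "I-LF", "B-O", "B-AC", "B-O"]
print(extract_spans(tokens, tags))  # [('LF', 'risk ratios'), ('AC', 'RRs')]
```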

### Data Fields

- tokens: the tokens contained in the text.
- pos_tags: the Part-of-Speech tags for the corresponding tokens above, obtained with the spaCy POS tagger.
- ner_tags: the tags for abbreviations (AC) and long forms (LF).

## Original Dataset (for exploration only; for the CW, you must use the PLOD-CW subset)

We provide two variants of our dataset: Filtered and Unfiltered. They are described in our paper here.

  1. The Filtered version can be accessed via HuggingFace Datasets here, and a CoNLL-format version is available here.

  2. The Unfiltered version can be accessed via HuggingFace Datasets here, and a CoNLL-format version is available here.

  3. The SDU Shared Task data we use for zero-shot testing is available here.

# Dataset Card for PLOD-filtered

## Dataset Description

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The data was extracted from PLOS journals online, then tokenized and normalized.

#### Who are the source language producers?

PLOS journals

## Additional Information

### Dataset Curators

The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan. The subset was created by Shenbin Qian from the new clean version of PLOD to be released at LREC COLING 2024.

### Licensing Information

CC-BY-SA 4.0

### Citation Information

[Needs More Information]

## Installation

We use the custom NER pipeline in the spaCy transformers library to train our models. This library supports training with any pre-trained language model available on the :rocket: HuggingFace repository.
Please see the instructions on these websites to set up your own custom training with our dataset and reproduce the experiments using spaCy.

Alternatively, you can reproduce the experiments via the Python notebook we provide here, which uses the HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the README cards of the models linked below. Before starting, please perform the following steps:

```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```

Now, you can use the notebook to reproduce the experiments.
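One preprocessing step the Trainer-based notebook must handle is that transformer tokenizers split words into subwords, while the dataset's ner_tags are per word. The usual fix is to label only the first subword of each word and mask the rest with -100 so the loss ignores them. A minimal sketch of that alignment; the `word_ids` list below is hand-written for illustration (a real tokenizer provides it via `tokenized.word_ids()`):

```python
# Sketch: align word-level NER labels to subword tokens.
# `word_ids` maps each subword to the index of its source word,
# with None for special tokens such as [CLS] and [SEP].

IGNORE = -100  # PyTorch cross-entropy ignore_index

def align_labels(word_labels, word_ids):
    aligned, previous = [], None
    for word_id in word_ids:
        if word_id is None:            # special token
            aligned.append(IGNORE)
        elif word_id != previous:      # first subword of a word
            aligned.append(word_labels[word_id])
        else:                          # continuation subword
            aligned.append(IGNORE)
        previous = word_id
    return aligned

# "BW" stays whole; "differences" is split into two subwords here
word_labels = [1, 0]              # e.g. class ids for B-AC, B-O
word_ids = [None, 0, 1, 1, None]  # [CLS] BW differ ##ences [SEP]
print(align_labels(word_labels, word_ids))  # [-100, 1, 0, -100, -100]
```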

## Model(s)

Our best-performing models are hosted on the HuggingFace models repository.

| Models | PLOD - Unfiltered | PLOD - Filtered | Description |
| --- | --- | --- | --- |
| RoBERTa-large | RoBERTa-large-finetuned-abbr | -soon- | Fine-tuning on the RoBERTa-large language model |
| RoBERTa-base | -soon- | RoBERTa-base-finetuned-abbr | Fine-tuning on the RoBERTa-base language model |
| AlBERT-large-v2 | AlBERT-large-v2-finetuned-abbDet | -soon- | Fine-tuning on the AlBERT-large-v2 language model |

Via the links provided above, the model(s) can be used with the help of the Inference API in the web browser itself. We have placed some examples with the API for testing.

## Usage

The HuggingFace model pages linked above include instructions for using these models locally in Python; the notebook provided in the Git repo demonstrates the same workflow.
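For the coursework you will typically want to score predicted tags against the gold ner_tags. A minimal, hypothetical sketch of per-class precision, recall and F1 at the token level (libraries such as seqeval additionally compute span-level scores; nothing here is prescribed by the dataset itself):

```python
# Sketch: token-level precision/recall/F1 for a single tag class.

def prf(gold, pred, label):
    tp = sum(1 for g, p in zip(gold, pred) if g == p == label)
    fp = sum(1 for g, p in zip(gold, pred) if p == label and g != label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["B-AC", "B-O", "B-LF", "I-LF", "B-O"]
pred = ["B-AC", "B-AC", "B-LF", "B-O", "B-O"]
print(prf(gold, pred, "B-AC"))  # (0.5, 1.0, 0.6666666666666666)
```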