---
language: en
tags:
  - bert
  - regression
  - pytorch
pipeline_tag: text-classification
widget:
  - text: >-
      We propose a new approach, based on Transformer-based encoding, to
      highlight extraction. To the best of our knowledge, this is the first
      attempt to use transformer architectures to address automatic highlight
      generation. [SEP] Highlights are short sentences used to annotate
      scientific papers. They complement the abstract content by conveying the
      main result findings. To automate the process of paper annotation,
      highlights extraction aims at extracting from 3 to 5 paper sentences via
      supervised learning. Existing approaches rely on ad hoc linguistic
      features, which depend on the analyzed context, and apply recurrent neural
      networks, which are not effective in learning long-range text
      dependencies. This paper leverages the attention mechanism adopted in
      transformer models to improve the accuracy of sentence relevance
      estimation. Unlike existing approaches, it relies on the end-to-end
      training of a deep regression model. To attend patterns relevant to
      highlights content it also enriches sentence encodings with a
      section-level contextualization. The experimental results, achieved on
      three different benchmark datasets, show that the designed architecture is
      able to achieve significant performance improvements compared to the
      state-of-the-art.
  - text: >-
      We design a context-aware sentence-level regressor, in which the semantic
      similarity between candidate sentences and highlights is estimated by also
      attending the contextual knowledge provided by the other paper sections.
      [SEP] Highlights are short sentences used to annotate scientific papers.
      They complement the abstract content by conveying the main result
      findings. To automate the process of paper annotation, highlights
      extraction aims at extracting from 3 to 5 paper sentences via supervised
      learning. Existing approaches rely on ad hoc linguistic features, which
      depend on the analyzed context, and apply recurrent neural networks, which
      are not effective in learning long-range text dependencies. This paper
      leverages the attention mechanism adopted in transformer models to improve
      the accuracy of sentence relevance estimation. Unlike existing approaches,
      it relies on the end-to-end training of a deep regression model. To attend
      patterns relevant to highlights content it also enriches sentence
      encodings with a section-level contextualization. The experimental
      results, achieved on three different benchmark datasets, show that the
      designed architecture is able to achieve significant performance
      improvements compared to the state-of-the-art.
  - text: >-
      Fig. 2, Fig. 3, Fig. 4 show the effect of varying the number K of selected
      highlights on the extraction performance. As expected, recall values
      increase while increasing the number of selected highlights, whereas
      precision values show an opposite trend. [SEP] Highlights are short
      sentences used to annotate scientific papers. They complement the abstract
      content by conveying the main result findings. To automate the process of
      paper annotation, highlights extraction aims at extracting from 3 to 5
      paper sentences via supervised learning. Existing approaches rely on ad
      hoc linguistic features, which depend on the analyzed context, and apply
      recurrent neural networks, which are not effective in learning long-range
      text dependencies. This paper leverages the attention mechanism adopted in
      transformer models to improve the accuracy of sentence relevance
      estimation. Unlike existing approaches, it relies on the end-to-end
      training of a deep regression model. To attend patterns relevant to
      highlights content it also enriches sentence encodings with a
      section-level contextualization. The experimental results, achieved on
      three different benchmark datasets, show that the designed architecture is
      able to achieve significant performance improvements compared to the
      state-of-the-art.
---

## General Information

This model is trained on journal publications belonging to the Computer Science domain.

This is an allenai/scibert_scivocab_cased model fine-tuned on the scientific domain. The model is trained with a regression objective to estimate the relevance of a sentence given the provided context (e.g., the abstract of the scientific paper).
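As a rough illustration, the sketch below queries such a regression model with the 🤗 Transformers library. The model id `MorenoLaQuatra/thext-cs-scibert` and the single-logit regression head (`num_labels=1`) are assumptions inferred from this card, not guaranteed specifics:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed Hugging Face model id for this card; adjust if the actual repo name differs.
MODEL_ID = "MorenoLaQuatra/thext-cs-scibert"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# A regression model exposes a single unnormalized output, hence num_labels=1 (assumption).
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=1)
model.eval()

sentence = "We propose a new approach, based on Transformer-based encoding, to highlight extraction."
context = "Highlights are short sentences used to annotate scientific papers. ..."  # e.g., the paper abstract

# Passing the pair encodes the input as "sentence [SEP] context", matching the widget examples above.
inputs = tokenizer(sentence, context, truncation=True, return_tensors="pt")
with torch.no_grad():
    relevance = model(**inputs).logits.squeeze().item()

print(f"Estimated sentence relevance: {relevance:.4f}")
```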

The model is used in the paper 'Transformer-based highlights extraction from scientific papers', published in the Knowledge-Based Systems journal. It achieves state-of-the-art performance on the task of highlights extraction from scientific papers.

Access the full paper here: https://doi.org/10.1016/j.knosys.2022.109382

## Usage

For detailed usage, please refer to the official repository: https://github.com/MorenoLaQuatra/THExt
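For illustration only, the relevance scores could then be used to rank a paper's sentences and keep the top K as highlights. The `score` callable below is a hypothetical wrapper around the model call shown earlier, not part of the THExt API:

```python
def extract_highlights(sentences, context, score, k=3):
    """Rank candidate sentences by estimated relevance and keep the top k.

    `score(sentence, context)` is assumed to return the model's regression
    output for the "sentence [SEP] context" input, as in the sketch above.
    """
    ranked = sorted(sentences, key=lambda s: score(s, context), reverse=True)
    return ranked[:k]  # highlights extraction typically keeps 3 to 5 sentences
```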

## References

If you find it useful, please cite the following paper:

```bibtex
@article{thext,
  title={Transformer-based highlights extraction from scientific papers},
  author={La Quatra, Moreno and Cagliero, Luca},
  journal={Knowledge-Based Systems},
  pages={109382},
  year={2022},
  publisher={Elsevier}
}
```