annotations_creators:
  - machine-generated
language:
  - en
language_creators:
  - found
license:
  - cc-by-sa-4.0
multilinguality:
  - monolingual
pretty_name: unarXive citation recommendation
size_categories:
  - 1M<n<10M
tags:
  - arXiv.org
  - arXiv
  - citation recommendation
  - citation
  - reference
  - publication
  - paper
  - preprint
  - section
  - physics
  - mathematics
  - computer science
  - cs
task_categories:
  - text-classification
task_ids:
  - multi-class-classification
source_datasets:
  - extended|10.5281/zenodo.7752615
dataset_info:
  features:
    - name: _id
      dtype: string
    - name: text
      dtype: string
    - name: marker
      dtype: string
    - name: marker_offsets
      sequence:
        sequence: int64
    - name: label
      dtype: string
  config_name: .
  splits:
    - name: train
      num_bytes: 5457336094
      num_examples: 2043192
    - name: test
      num_bytes: 551012459
      num_examples: 225084
    - name: validation
      num_bytes: 586422261
      num_examples: 225348
  download_size: 7005370567
  dataset_size: 6594770814

Dataset Card for unarXive citation recommendation

Dataset Description

Dataset Summary

The unarXive citation recommendation dataset contains 2.5 million paragraphs from computer science papers, each with an annotated citation marker. The paragraphs and citation information are derived from unarXive.

Note that citation information is only given as the OpenAlex ID of the cited paper. An important consideration for models is therefore whether the data is used as is, or whether additional information on the cited papers (metadata, abstracts, full text, etc.) is used.
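
For the latter case, metadata for a cited document can be looked up via its OpenAlex ID, for example through the public OpenAlex API. The sketch below is illustrative and not part of the dataset; the requests-based call and the choice of fields are assumptions.

import requests

def fetch_cited_work(openalex_url):
    # Labels are full OpenAlex URLs, e.g. 'https://openalex.org/W3115081393';
    # the corresponding API record lives under api.openalex.org/works/<ID>.
    work_id = openalex_url.rsplit('/', 1)[-1]
    response = requests.get(f'https://api.openalex.org/works/{work_id}')
    response.raise_for_status()
    work = response.json()
    # Title and abstract (returned as an inverted index) are examples of
    # metadata that could be used to enrich the label side of the task.
    return work.get('title'), work.get('abstract_inverted_index')

title, abstract_index = fetch_cited_work('https://openalex.org/W3115081393')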

The dataset can be used as follows.

from datasets import load_dataset

citrec_data = load_dataset('saier/unarXive_citrec')
citrec_data = citrec_data.class_encode_column('label')  # assign target label column
citrec_data = citrec_data.remove_columns('_id')         # remove sample ID column
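
After class_encode_column, the label column is a ClassLabel feature, so for example the number of distinct cited documents (i.e. target classes) and individual samples can be inspected. A brief sketch continuing from the snippet above:

num_labels = citrec_data['train'].features['label'].num_classes  # distinct cited documents
print(num_labels)
print(citrec_data['train'][0])  # one paragraph with marker, offsets, and encoded label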

Dataset Structure

Data Instances

Each data instance contains the paragraph’s text as well as information on one of the contained citation markers, in the form of a label (the cited document’s OpenAlex ID), the citation marker string, and the marker’s character offsets within the text. An example is shown below.

{'_id': '7c1464bb-1f0f-4b38-b1a3-85754eaf6ad1',
 'label': 'https://openalex.org/W3115081393',
 'marker': '[1]',
 'marker_offsets': [[316, 319]],
 'text': 'Data: For sentiment analysis on Hindi-English CM tweets, we used the '
         'dataset provided by the organizers of Task 9 at SemEval-2020.\n'
         'The training dataset consists of 14 thousand tweets.\n'
         'Whereas, the validation dataset as well as the test dataset contain '
         '3 thousand tweets each.\n'
         'The details of the dataset are given in [1]}.\n'
         'For this task, we did not use any external dataset.\n'}
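
The marker offsets are character positions into text; in the example above, text[316:319] yields '[1]'. A common preprocessing step in citation recommendation is to mask the marker out of its context. A minimal sketch, with the mask token being an arbitrary choice:

def mask_citation_markers(sample, mask_token='[CIT]'):
    # Replace each annotated citation marker with a placeholder token.
    # Offsets are processed right to left so earlier offsets stay valid.
    text = sample['text']
    for start, end in sorted(sample['marker_offsets'], reverse=True):
        text = text[:start] + mask_token + text[end:]
    return text

masked_text = mask_citation_markers(citrec_data['train'][0])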

Data Splits

The data is split into training, development, and testing data as follows (split names as used by load_dataset in parentheses; the sizes can also be checked against a loaded copy of the dataset, as sketched below).

  • Training (train): 2,043,192 instances
  • Development (validation): 225,348 instances
  • Testing (test): 225,084 instances
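
A minimal check of the split sizes, reusing the citrec_data object loaded above:

print({split: ds.num_rows for split, ds in citrec_data.items()})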

Dataset Creation

Source Data

The paragraph texts are extracted from the dataset unarXive.

Who are the source language producers?

The paragraphs were written by the authors of the arXiv papers. The file license_info.jsonl contains author and text licensing information for all samples. An example record is shown below.


{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
 'license': 'http://creativecommons.org/licenses/by/4.0/',
 'paper_arxiv_id': '2011.09852',
 'sample_ids': ['cc375518-347c-43d0-bfb2-f88564d66df8',
                '18dc073e-a48e-488e-b34c-e5fc3cb8a4ca',
                '0c2e89b3-d863-4bc2-9e11-8f6c48d867cb',
                'd85e46cf-b11d-49b6-801b-089aa2dd037d',
                '92915cea-17ab-4a98-aad2-417f6cdd53d2',
                'e88cb422-47b7-4f69-9b0b-fbddf8140d98',
                '4f5094a4-0e6e-46ae-a34d-e15ce0b9803c',
                '59003494-096f-4a7c-ad65-342b74eed561',
                '6a99b3f5-217e-4d3d-a770-693483ef8670']}
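
A sketch of how this file could be used to look up licensing information for individual samples, assuming it is read line by line as JSON records like the one above:

import json

# Build a mapping from sample ID to its licensing record.
sample_to_license = {}
with open('license_info.jsonl') as f:
    for line in f:
        record = json.loads(line)
        for sample_id in record['sample_ids']:
            sample_to_license[sample_id] = {
                'authors': record['authors'],
                'license': record['license'],
                'paper_arxiv_id': record['paper_arxiv_id'],
            }

print(sample_to_license.get('cc375518-347c-43d0-bfb2-f88564d66df8'))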

Annotations

Citation information in unarXive is automatically determined (see implementation).

Additional Information

Licensing information

The dataset is released under the Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA 4.0).

Citation Information

@inproceedings{Saier2023unarXive,
  author        = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael},
  title         = {{unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network}},
  booktitle     = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries},
  year          = {2023},
  series        = {JCDL '23}
}