CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation

CiteSum contains TLDR summaries of scientific papers, automatically derived from their citation texts with no human annotation. This makes it around 30 times larger than SciTLDR, the previous human-curated dataset.




Yuning Mao, Ming Zhong, Jiawei Han

University of Illinois Urbana-Champaign

{yuningm2, mingz5, hanj}

Dataset size

Train: 83,304
Validation: 4,721
Test: 4,921

Data details

  • src (string): source text (a long description of the paper)
  • tgt (string): target text (the TLDR of the paper)
  • paper_id (string): unique ID of the paper
  • title (string): title of the paper
  • discipline (dict):
    • venue (string): conference venue where the paper was published
    • journal (string): journal in which the paper was published
    • mag_field_of_study (list[str]): scientific fields that the paper falls under
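Restated as Python type hints, the schema looks roughly like this (an illustrative sketch for readers; `CiteSumRecord` and `Discipline` are names chosen here, not part of the datasets library):

```python
from typing import List, Optional, TypedDict


class Discipline(TypedDict):
    """Publication metadata; 'journal' may be None, as in the sample below."""
    venue: Optional[str]
    journal: Optional[str]
    mag_field_of_study: Optional[List[str]]


class CiteSumRecord(TypedDict):
    """One CiteSum example: a paper description paired with its citation-derived TLDR."""
    src: str          # source text: long description of the paper
    tgt: str          # target text: TLDR of the paper
    paper_id: str     # unique id for the paper
    title: str        # title of the paper
    discipline: Discipline
```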


    'src': 'We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.',

    'tgt': 'A convolutional neural network model for predicting hashtags was proposed in REF .',

    'paper_id': '14697143',

    'title': '#TagSpace: Semantic Embeddings from Hashtags',

    'discipline': {
        'venue': 'EMNLP',
        'journal': None,
        'mag_field_of_study': ['Computer Science']
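Because discipline is a nested dict, filtering records by field of study is straightforward; a minimal sketch using the sample record (src truncated here; `by_field` is a hypothetical helper, not part of any library):

```python
def by_field(records, field):
    """Keep records whose mag_field_of_study list contains the given field."""
    return [r for r in records
            if field in (r["discipline"]["mag_field_of_study"] or [])]


sample = {
    "src": "We describe a convolutional neural network ...",  # truncated for brevity
    "tgt": "A convolutional neural network model for predicting hashtags was proposed in REF .",
    "paper_id": "14697143",
    "title": "#TagSpace: Semantic Embeddings from Hashtags",
    "discipline": {
        "venue": "EMNLP",
        "journal": None,
        "mag_field_of_study": ["Computer Science"],
    },
}

cs_papers = by_field([sample], "Computer Science")
```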

Using the dataset

from datasets import load_dataset

ds = load_dataset("yuningm/citesum")
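For summarization fine-tuning, each record maps src to tgt. A minimal preprocessing sketch (the "summarize: " prefix is a T5-style convention and the character-level truncation is a placeholder; both are assumptions, not something the dataset prescribes):

```python
def to_seq2seq(record, max_src_chars=2000):
    """Turn a CiteSum record into an (input, target) pair for seq2seq training.

    Real pipelines truncate at the tokenizer level; character slicing here
    just keeps the sketch self-contained.
    """
    return {
        "input": "summarize: " + record["src"][:max_src_chars],
        "target": record["tgt"],
    }


pair = to_seq2seq({
    "src": "We describe a convolutional neural network ...",  # truncated sample
    "tgt": "A convolutional neural network model for predicting hashtags was proposed in REF .",
})
```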

Data location
