
Dataset Card for SciCo

Dataset Summary

SciCo consists of clusters of mentions in context and a hierarchy over them. The corpus is drawn from computer science papers, and the concept mentions are methods and tasks from across CS. Scientific concepts pose significant challenges: they often take diverse forms (e.g., class-conditional image synthesis and categorical image generation) or are ambiguous (e.g., network architecture in AI vs. systems research). To build SciCo, we develop a new candidate generation approach built on three resources: a low-coverage KB (PapersWithCode), a noisy hypernym extractor, and curated candidates.

Supported Tasks and Leaderboards

More Information Needed


Languages

The text in the dataset is in English.

Dataset Structure

Data Instances

More Information Needed

Data Fields

  • flatten_tokens: a single list of all tokens in the topic
  • flatten_mentions: array of mentions; each mention is represented by [start, end, cluster_id]
  • tokens: array of paragraphs
  • doc_ids: the doc_id of each paragraph in tokens
  • metadata: metadata for each doc_id
  • sentences: sentence boundaries for each paragraph in tokens, given as [start, end]
  • mentions: array of mentions; each mention is represented by [paragraph_id, start, end, cluster_id]
  • relations: array of binary relations between cluster_ids, given as [parent, child]
  • id: id of the topic
  • hard_10 and hard_20 (only in the test set): flags for the 10% or 20% hardest topics based on Levenshtein similarity
  • source: source of this topic: PapersWithCode (pwc), hypernym, or curated
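To make the field layout concrete, here is a minimal sketch of a topic record and helpers for reading it. The record values are invented for illustration (they are not taken from the dataset), and the assumption that mention spans use an exclusive end index is noted in the code.

```python
# Illustrative topic record mirroring the fields described above.
# All values are made up for this example; spans are assumed to be
# end-exclusive token offsets into flatten_tokens.
topic = {
    "id": 0,
    "flatten_tokens": ["We", "propose", "class-conditional", "image", "synthesis", "."],
    "flatten_mentions": [[2, 5, 1]],    # [start, end, cluster_id]
    "tokens": [["We", "propose", "class-conditional", "image", "synthesis", "."]],
    "doc_ids": [101],
    "sentences": [[[0, 6]]],            # sentence boundaries per paragraph
    "mentions": [[0, 2, 5, 1]],         # [paragraph_id, start, end, cluster_id]
    "relations": [[0, 1]],              # [parent, child] cluster pairs
    "source": "pwc",
}

def mention_strings(topic):
    """Recover each mention's surface form from the flattened token list."""
    return [
        " ".join(topic["flatten_tokens"][start:end])
        for start, end, _cluster in topic["flatten_mentions"]
    ]

def cluster_children(topic):
    """Group the [parent, child] relations into a parent -> children map."""
    children = {}
    for parent, child in topic["relations"]:
        children.setdefault(parent, []).append(child)
    return children

print(mention_strings(topic))   # ['class-conditional image synthesis']
print(cluster_children(topic))  # {0: [1]}
```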

Data Splits

            Train   Validation    Test
Topics        221          100     200
Documents    9013         4120    8237
Mentions    10925         4874   10424
Clusters     4080         1867    3711
Relations    2514         1747    2379
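As a quick sanity check on the counts above, each topic contains roughly 50 mentions in every split; a minimal sketch of the arithmetic:

```python
# Per-topic averages derived from the split statistics in the table above.
splits = {
    "train":      {"topics": 221, "mentions": 10925},
    "validation": {"topics": 100, "mentions": 4874},
    "test":       {"topics": 200, "mentions": 10424},
}

averages = {
    name: round(s["mentions"] / s["topics"], 1)
    for name, s in splits.items()
}
print(averages)  # mentions per topic in each split
```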

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed


Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

This dataset was initially created by Arie Cattan, Sophie Johnson, Daniel Weld, Ido Dagan, Iz Beltagy, Doug Downey and Tom Hope, while Arie was an intern at the Allen Institute for Artificial Intelligence.

Licensing Information

This dataset is distributed under the Apache License 2.0.

Citation Information

@inproceedings{cattan2021scico,
    title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
    author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
    booktitle={3rd Conference on Automated Knowledge Base Construction},
    year={2021}
}


Thanks to @ariecattan for adding this dataset.
