
Dataset Card for SciCo
Dataset Summary
SciCo consists of clusters of mentions in context and a hierarchy over them. The corpus is drawn from computer science papers, and the concept mentions are methods and tasks from across CS. Scientific concepts pose significant challenges: they often take diverse forms (e.g., class-conditional image synthesis and categorical image generation) or are ambiguous (e.g., network architecture in AI vs. systems research). To build SciCo, we develop a new candidate generation approach built on three resources: a low-coverage KB (https://paperswithcode.com/), a noisy hypernym extractor, and curated candidates.
Supported Tasks and Leaderboards
Languages
The text in the dataset is in English.
Dataset Structure
Data Instances
Data Fields
- `flatten_tokens`: a single list of all tokens in the topic
- `flatten_mentions`: array of mentions; each mention is represented as [start, end, cluster_id]
- `tokens`: array of paragraphs
- `doc_ids`: doc_id of each paragraph in `tokens`
- `metadata`: metadata of each doc_id
- `sentences`: sentence boundaries [start, end] for each paragraph in `tokens`
- `mentions`: array of mentions; each mention is represented as [paragraph_id, start, end, cluster_id]
- `relations`: array of binary relations between cluster_ids [parent, child]
- `id`: id of the topic
- `hard_10` and `hard_20` (only in the test set): flags for the 10% or 20% hardest topics based on Levenshtein similarity
- `source`: source of this topic: PapersWithCode (pwc), hypernym, or curated
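The mention and relation encodings above can be sketched with two small helpers. The `topic` dict below is an illustrative toy, not real SciCo data, and the span end index is assumed to be inclusive here; check against an actual instance before relying on that.

```python
# Toy sketch of SciCo-style fields (hypothetical data, not drawn from
# the real dataset files; span ends are assumed inclusive).

def mention_text(flatten_tokens, mention):
    """Recover the surface form of a mention given [start, end, cluster_id]."""
    start, end, _cluster_id = mention
    return " ".join(flatten_tokens[start:end + 1])  # end assumed inclusive

def children_of(relations):
    """Group [parent, child] cluster relations into a parent -> children map."""
    tree = {}
    for parent, child in relations:
        tree.setdefault(parent, []).append(child)
    return tree

topic = {
    "flatten_tokens": ["We", "study", "class-conditional", "image", "synthesis",
                       "and", "categorical", "image", "generation", "."],
    "flatten_mentions": [[2, 4, 0], [6, 8, 1]],  # two coreferent-style mentions
    "relations": [[2, 0], [2, 1]],               # hypothetical parent cluster 2
}

for m in topic["flatten_mentions"]:
    print(mention_text(topic["flatten_tokens"], m))
print(children_of(topic["relations"]))
```

With inclusive ends, the first mention resolves to "class-conditional image synthesis" and the relations collapse to a map from each parent cluster to its children.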
Data Splits
| | Train | Validation | Test |
|---|---|---|---|
| Topics | 221 | 100 | 200 |
| Documents | 9013 | 4120 | 8237 |
| Mentions | 10925 | 4874 | 10424 |
| Clusters | 4080 | 1867 | 3711 |
| Relations | 2514 | 1747 | 2379 |
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
This dataset was initially created by Arie Cattan, Sophie Johnson, Daniel Weld, Ido Dagan, Iz Beltagy, Doug Downey, and Tom Hope, while Arie was an intern at the Allen Institute for Artificial Intelligence.
Licensing Information
This dataset is distributed under Apache License 2.0.
Citation Information
@inproceedings{
cattan2021scico,
title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
booktitle={3rd Conference on Automated Knowledge Base Construction},
year={2021},
url={https://openreview.net/forum?id=OFLbgUP04nC}
}
Contributions
Thanks to @ariecattan for adding this dataset.
