
The 1st Scientific Figure Captioning (SciCap) Challenge 📖📊

Welcome to the 1st Scientific Figure Captioning (SciCap) Challenge! 🎉 This dataset contains approximately 400,000 scientific figure images sourced from various arXiv papers, along with their captions and relevant paragraphs. The challenge is open to researchers, AI/NLP/CV practitioners, and anyone interested in developing computational models for generating textual descriptions for visuals. 💻

Challenge homepage 🏠

Challenge Overview 🌟

The SciCap Challenge will be hosted at ICCV 2023 in the 5th Workshop on Closing the Loop Between Vision and Language (October 2-3, Paris, France) 🇫🇷. Participants are required to submit the generated captions for a hidden test set for evaluation.

The challenge is divided into two phases:

  • Test Phase (2.5 months): Use the provided training set, validation set, and public test set to build and test the models.
  • Challenge Phase (2 weeks): Submit results for a hidden test set that will be released before the submission deadline.

Winning teams will be determined by their results on the hidden test set 🏆. Details of the event's important dates, prizes, and judging criteria are listed on the challenge homepage.

Dataset Overview and Download 📚

This dataset is an expanded version of the original SciCap dataset. It includes figures and captions from arXiv papers in eight categories: Computer Science, Economics, Electrical Engineering and Systems Science, Mathematics, Physics, Quantitative Biology, Quantitative Finance, and Statistics 📊. It also covers data from ACL Anthology papers (ACL-Fig).

You can download the dataset using the following command:

# Download the full dataset snapshot (annotation JSON files and split image archives)
# into the local Hugging Face cache; the call returns the local directory path.
from huggingface_hub import snapshot_download
snapshot_download(repo_id="CrowdAILab/scicap", repo_type='dataset')

Merge all image split files into one 🧩

zip -F img-split.zip --out img.zip
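
After merging, extract the combined archive as usual (the output directory name here is just an example):

unzip img.zip -d img/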

The dataset schema is similar to the MSCOCO dataset (see the loading sketch after this list):

  • images: two separate folders for the arXiv and ACL figures 📁
  • annotations: JSON files containing the text information (file name, image id, figure type, OCR text, mapped image id, captions, normalized captions, paragraphs, and mentions) 📝
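
The annotation JSON files have top-level 'images' and 'annotations' fields (MSCOCO-style), so loading them with the Hugging Face json loader requires a field argument. Below is a minimal loading sketch, assuming the downloaded snapshot contains train-acl.json at its root; adjust the file name to the split you need:

import os
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Download (or reuse) the dataset snapshot and locate one annotation file.
local_dir = snapshot_download(repo_id="CrowdAILab/scicap", repo_type="dataset")

# Load the caption records; use field="images" instead to load the image metadata records.
annotations = load_dataset(
    "json",
    data_files=os.path.join(local_dir, "train-acl.json"),
    field="annotations",
)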

Evaluation and Submission 📩

You have to submit your generated captions in JSON format as shown below:

[
  {
    "image_id": int, 
    "caption": "PREDICTED CAPTION STRING"
  },
  {
    "image_id": int,
    "caption": "PREDICTED CAPTION STRING"
  }
...
]
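
A minimal sketch for writing a submission file in this format, assuming a hypothetical predictions dict that maps integer image ids to generated caption strings:

import json

# Hypothetical model output: {image_id (int): predicted caption (str)}
predictions = {1: "Example caption.", 2: "Another example caption."}

# Convert to the required list-of-objects format and write it to disk.
submission = [{"image_id": image_id, "caption": caption}
              for image_id, caption in predictions.items()]

with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)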

Submit your results using this challenge link 🔗. Participants must register on Eval.AI to access the leaderboard and submit results.

Please note: Participants should not use the original captions from the arXiv papers (termed "gold data") as input for their systems ⚠️.

Technical Report Submission 🗒️

All participating teams must submit a 2-4 page technical report detailing their system, adhering to the ICCV 2023 paper template 📄. Teams have the option to submit their reports to either the archival or non-archival tracks of the 5th Workshop on Closing the Loop Between Vision and Language.

Good luck with your participation in the 1st SciCap Challenge! 🍀🎊
