---
license: cc-by-nc-sa-4.0
---

# The 1st Scientific Figure Captioning (SciCap) Challenge 📖📊

Welcome to the 1st Scientific Figure Captioning (SciCap) Challenge! 🎉 This dataset contains approximately 400,000 scientific figure images sourced from arXiv papers, along with their captions and relevant paragraphs. The challenge is open to researchers, AI/NLP/CV practitioners, and anyone interested in developing computational models that generate textual descriptions for visuals. 💻

Challenge homepage 🏠

## Challenge Overview 🌟

The SciCap Challenge will be hosted at ICCV 2023 in the 5th Workshop on Closing the Loop Between Vision and Language (October 2-3, Paris, France) 🇫🇷. Participants are required to submit the generated captions for a hidden test set for evaluation.

The challenge is divided into two phases:

- **Test Phase (2.5 months):** Use the provided training set, validation set, and public test set to build and test the models.
- **Challenge Phase (2 weeks):** Submit results for a hidden test set that will be released before the submission deadline.

Winning teams will be determined based on their results for the hidden test set 🏆. Details of the event's important dates, prizes, and judging criteria are listed on the challenge homepage.

## Dataset Overview and Download 📚

This release is an expanded version of the original SciCap dataset and includes figures and captions from arXiv papers in eight categories: Computer Science, Economics, Electrical Engineering and Systems Science, Mathematics, Physics, Quantitative Biology, Quantitative Finance, and Statistics 📊. It also covers data from ACL Anthology papers (ACL-Fig).

You can download the dataset using the following command:

```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id="CrowdAILab/scicap", repo_type="dataset")
```
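If you prefer the files in a specific directory rather than the default Hugging Face cache, `snapshot_download` also accepts a `local_dir` argument; the directory name below is arbitrary:

```python
from huggingface_hub import snapshot_download

# local_dir pins the download to a specific folder instead of the
# default Hugging Face cache; "scicap" is an arbitrary target name.
snapshot_download(
    repo_id="CrowdAILab/scicap",
    repo_type="dataset",
    local_dir="scicap",
)
```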

Merge all image split files into one 🧩:

```bash
zip -F img-split.zip --out img.zip
```
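Once merged, `img.zip` is a regular archive and can be extracted with any standard tool; for example, a small Python sketch (the target directory name is arbitrary):

```python
import zipfile

# Extract the merged archive; "img/" is an arbitrary target directory.
with zipfile.ZipFile("img.zip") as archive:
    archive.extractall("img")
```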

The dataset schema is similar to that of the MSCOCO dataset; a minimal loading sketch follows the list:

- **images:** two separate folders, one for arXiv figures and one for ACL figures 📁
- **annotations:** JSON files containing the text information (filename, image id, figure type, OCR, mapped image id, captions, normalized captions, paragraphs, and mentions) 📝
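As a rough illustration of the MSCOCO-style layout, the sketch below joins the `images` and `annotations` lists on the image id. The file name `train.json` and the exact field names are assumptions; check the downloaded JSON for the authoritative keys.

```python
import json
from pathlib import Path

# Hypothetical paths and field names; verify them against the downloaded files.
root = Path("scicap")
with open(root / "annotations" / "train.json") as f:  # assumed file name
    data = json.load(f)

# MSCOCO-style: an "images" list and an "annotations" list joined on image id.
images = {img["id"]: img for img in data["images"]}
for ann in data["annotations"][:5]:
    img = images[ann["image_id"]]
    print(img["file_name"], "->", ann["caption"])
```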

## Evaluation and Submission 📩

Submit your generated captions in JSON format, as shown below:

```json
[
  {
    "image_id": int,
    "caption": "PREDICTED CAPTION STRING"
  },
  {
    "image_id": int,
    "caption": "PREDICTED CAPTION STRING"
  }
  ...
]
```
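A minimal sketch for serializing predictions into this format; the `predictions` mapping here is a stand-in for your model's output:

```python
import json

# predictions maps image_id (int) -> generated caption string (model output).
predictions = {
    1: "Training loss versus epochs for the baseline model.",
    2: "Throughput as a function of batch size.",
}

submission = [
    {"image_id": image_id, "caption": caption}
    for image_id, caption in predictions.items()
]

with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```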

Submit your results using this challenge link 🔗. Participants must register on Eval.AI to access the leaderboard and submit results.

Please note: Participants should not use the original captions from the arXiv papers (termed "gold data") as input for their systems ⚠️.

## Technical Report Submission 🗒️

All participating teams must submit a 2-4 page technical report detailing their system, following the ICCV 2023 paper template 📄. Teams may submit their reports to either the archival or non-archival track of the 5th Workshop on Closing the Loop Between Vision and Language.

Good luck with your participation in the 1st SciCap Challenge! 🍀🎊