---
license: cc-by-nc-sa-4.0
---

# The 1st Scientific Figure Captioning (SciCap) Challenge 📖📊

Welcome to the 1st Scientific Figure Captioning (SciCap) Challenge! 🎉 This dataset contains approximately 400,000 scientific figure images sourced from arXiv papers, along with their captions and relevant paragraphs. The challenge is open to researchers, AI/NLP/CV practitioners, and anyone interested in developing computational models for generating textual descriptions for visuals. 💻

*Challenge [homepage](http://SciCap.AI) 🏠*

## Challenge Overview 🌟

The SciCap Challenge will be hosted at ICCV 2023 in the 5th Workshop on Closing the Loop Between Vision and Language (October 2-3, Paris, France) 🇫🇷. Participants are required to submit generated captions for a hidden test set for evaluation.

The challenge is divided into two phases:

- **Test Phase (2.5 months):** Use the provided training set, validation set, and public test set to build and test your models.
- **Challenge Phase (2 weeks):** Submit results for a hidden test set that will be released before the submission deadline.

Winning teams will be determined by their results on the hidden test set 🏆. Details of the event's important dates, prizes, and judging criteria are listed on the challenge homepage.

## Dataset Overview and Download 📚

The SciCap dataset is an expanded version of the [original SciCap](https://aclanthology.org/2021.findings-emnlp.277.pdf) dataset and includes figures and captions from arXiv papers in eight categories: Computer Science, Economics, Electrical Engineering and Systems Science, Mathematics, Physics, Quantitative Biology, Quantitative Finance, and Statistics 📊. Additionally, it covers data from ACL Anthology papers ([ACL-Fig](https://arxiv.org/pdf/2301.12293.pdf)).

You can download the dataset using the following command:

```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id="CrowdAILab/scicap", repo_type="dataset")
```

_Merge all image split files into one_ 🧩

```
zip -F img-split.zip --out img.zip
```

The dataset schema is similar to the `mscoco` dataset:

- **images:** two separate folders for arXiv and ACL figures 📁
- **annotations:** JSON files containing text information (filename, image id, figure type, OCR, mapped image id, captions, normalized captions, paragraphs, and mentions) 📝

## Evaluation and Submission 📩

Submit your generated captions in JSON format, as shown below (a minimal sketch of producing this file appears in the appendix at the end of this page):

```json
[
  {
    "image_id": int,
    "caption": "PREDICTED CAPTION STRING"
  },
  {
    "image_id": int,
    "caption": "PREDICTED CAPTION STRING"
  }
  ...
]
```

Submit your results using this [challenge link](https://eval.ai/web/challenges/challenge-page/2012/overview) 🔗. Participants must register on [Eval.AI](http://Eval.AI) to access the leaderboard and submit results.

**Please note:** Participants must not use the original captions from the arXiv papers (termed "gold data") as input for their systems ⚠️.

## Technical Report Submission 🗒️

All participating teams must submit a 2-4 page technical report detailing their system, adhering to the ICCV 2023 paper template 📄. Teams may submit their reports to either the archival or non-archival track of the 5th Workshop on Closing the Loop Between Vision and Language.

Good luck with your participation in the 1st SciCap Challenge! 🍀🎊
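
## Appendix: Building a Submission File 🧪

For convenience, the sketch below shows one way to serialize predicted captions into the JSON format described in "Evaluation and Submission". The `predictions` mapping and its caption strings are hypothetical placeholders, not part of the dataset or an official tool.

```python
import json

# Hypothetical predictions: map each image_id (int) to its predicted caption string.
predictions = {
    1: "Accuracy versus number of training epochs for the three baselines.",
    2: "Distribution of figure types in the SciCap training set.",
}

# Reshape into the list-of-objects layout expected by the challenge server.
submission = [
    {"image_id": image_id, "caption": caption}
    for image_id, caption in predictions.items()
]

# Write the file to upload via the Eval.AI challenge page.
with open("submission.json", "w") as f:
    json.dump(submission, f, indent=2)
```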