Commit 20a87e1 by shaurya0512 • 1 parent: e485539

- added details

Files changed (1): README.md (+17, -17)

README.md CHANGED

---
license: cc-by-nc-sa-4.0
---

# HuggingFace Dataset Readme: The 1st Scientific Figure Captioning (SciCap) Challenge 📖📊

Welcome to the 1st Scientific Figure Captioning (SciCap) Challenge! 🎉 This dataset contains approximately 400,000 scientific figure images sourced from various arXiv papers, along with their captions and relevant paragraphs. The challenge is open to researchers, AI/NLP/CV practitioners, and anyone interested in developing computational models for generating textual descriptions for visuals. 💻

*Challenge [homepage](http://SciCap.AI) 🏠*

## Challenge Overview 🌟

The SciCap Challenge will be hosted at ICCV 2023 in the 5th Workshop on Closing the Loop Between Vision and Language (October 2-3, Paris, France) 🇫🇷. Participants are required to submit the generated captions for a hidden test set for evaluation.

The challenge is divided into two phases:
- **Test Phase (2.5 months):** Use the provided training set, validation set, and public test set to build and test the models.
- **Challenge Phase (2 weeks):** Submit results for a hidden test set that will be released before the submission deadline.

Winning teams will be determined based on their results on the hidden test set 🏆. Details of the event's important dates, prizes, and judging criteria are listed on the challenge homepage.

## Dataset Overview and Download 📚

The SciCap dataset is an expanded version of the [original SciCap](https://aclanthology.org/2021.findings-emnlp.277.pdf) dataset and includes figures and captions from arXiv papers in eight categories: Computer Science, Economics, Electrical Engineering and Systems Science, Mathematics, Physics, Quantitative Biology, Quantitative Finance, and Statistics 📊. It also covers data from ACL Anthology papers ([ACL-Fig](https://arxiv.org/pdf/2301.12293.pdf)).

You can download the dataset using the following command:

```
from huggingface_hub import snapshot_download
snapshot_download(repo_id="CrowdAILab/scicap", repo_type='dataset')
```
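
If you want to inspect the annotation files before pulling all of the images, `snapshot_download` also accepts filtering arguments. The sketch below assumes a recent `huggingface_hub` release and an arbitrary local folder name; adjust the pattern to match the repository's actual file listing.

```
from huggingface_hub import snapshot_download

# Sketch: fetch only the JSON annotation files into a local folder.
# The "*.json" pattern and the "scicap_data" folder name are just examples;
# browse the repository file list to decide what you actually need.
local_path = snapshot_download(
    repo_id="CrowdAILab/scicap",
    repo_type="dataset",
    allow_patterns=["*.json"],
    local_dir="scicap_data",
)
print("Downloaded to:", local_path)
```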

_Merge all image split files into one_ 🧩

```
zip -F img-split.zip --out img.zip
```
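
Once the split archives are merged, the combined `img.zip` can be extracted with any zip tool. Here is a minimal Python sketch using only the standard library; the destination folder name is arbitrary.

```
import zipfile

# Extract the merged archive produced by `zip -F img-split.zip --out img.zip`.
# "images/" is just an example destination folder.
with zipfile.ZipFile("img.zip") as zf:
    zf.extractall("images")
    print(f"Extracted {len(zf.namelist())} files into images/")
```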

The dataset schema is similar to the `mscoco` dataset (a minimal loading sketch follows the list):

- **images:** two separate folders - arXiv and ACL figures 📁
- **annotations:** JSON files containing text information (filename, image id, figure type, OCR, mapped image id, captions, normalized captions, paragraphs, and mentions) 📝
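
Since the exact JSON key names are not spelled out above, a quick way to discover them is to open one annotation file and print its top-level structure. This is a sketch with a placeholder path; point it at any annotation file from the download.

```
import json
from pathlib import Path

# Placeholder path: point this at any annotation JSON from the download.
ann_path = Path("scicap_data") / "annotations" / "example.json"

with ann_path.open() as f:
    data = json.load(f)

# Print the top-level structure so you can see the actual field names
# (filename, image id, figure type, OCR, captions, ...) before writing a loader.
if isinstance(data, list):
    print(f"{len(data)} records; first record keys: {sorted(data[0].keys())}")
else:
    print(f"Top-level keys: {sorted(data.keys())}")
```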

## Evaluation and Submission 📩

You have to submit your generated captions in JSON format as shown below:

```
[
  ...
]
```
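
For illustration only, the sketch below shows an MSCOCO-style caption result list being written to disk. The `image_id` and `caption` field names are assumptions, not the official schema; follow the exact format shown in the challenge instructions and in the example above.

```
import json

# Hypothetical structure modeled on MSCOCO-style caption results;
# the real field names and values must match the format required by the challenge.
predictions = [
    {"image_id": 0, "caption": "Accuracy of the proposed model versus training epochs."},
    {"image_id": 1, "caption": "Comparison of runtime across the three baselines."},
]

with open("captions_submission.json", "w") as f:
    json.dump(predictions, f, indent=2)
```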

Submit your results using this [challenge link](https://eval.ai/web/challenges/challenge-page/2012/overview) 🔗. Participants must register on [Eval.AI](http://Eval.AI) to access the leaderboard and submit results.

**Please note:** Participants should not use the original captions from the arXiv papers (termed "gold data") as input for their systems ⚠️.

## Technical Report Submission 🗒️

All participating teams must submit a 2-4 page technical report detailing their system, adhering to the ICCV 2023 paper template 📄. Teams may submit their reports to either the archival or non-archival track of the 5th Workshop on Closing the Loop Between Vision and Language.

Good luck with your participation in the 1st SciCap Challenge! 🍀🎊