---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: file_name_index
dtype: string
- name: text
dtype: string
- name: class
dtype: string
- name: super_class
dtype: string
- name: sub_class
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 59242453844.635
num_examples: 498279
- name: validation
num_bytes: 1783636593.843
num_examples: 16433
- name: test
num_bytes: 1874022111.346
num_examples: 16263
download_size: 63729889852
dataset_size: 62900112549.824005
---
# Dataset Card for "SciMMIR_dataset"
## SciMMIR
This is the repo for the paper [SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval](https://arxiv.org/abs/2401.13478).
<div align="center">
<img src="./imgs/Framework.png" width="80%" />
</div>
In this paper, we propose the novel SciMMIR benchmark and a corresponding dataset designed to address the gap in evaluating multi-modal information retrieval (MMIR) models in the scientific domain.
It is worth noting that we define a hierarchical data architecture of "Two subsets, Five subcategories" and use human-created keywords to classify the data (as shown in the table below).
<div align="center">
<img src="./imgs/data_architecture.png" width="50%" />
</div>
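The hierarchy is encoded directly in the `super_class`, `class`, and `sub_class` fields of the released dataset, so the subcategory distribution can be inspected without the keyword lists themselves. The snippet below is a minimal sketch that counts test-split pairs per (super_class, class) bucket; it makes no assumption about the specific label strings:

```python
from collections import Counter

import datasets

# Load only the test split to keep the example lightweight
test_split = datasets.load_dataset("m-a-p/SciMMIR", split="test")

# Count image-text pairs per (super_class, class) bucket; the images are never
# decoded because only the two label columns are accessed
distribution = Counter(zip(test_split["super_class"], test_split["class"]))
for (super_class, sub_category), count in sorted(distribution.items()):
    print(f"{super_class:>10} / {sub_category:<20} {count}")
```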
As shown in the table below, we conducted extensive baseline experiments (both fine-tuned and zero-shot) across the various subsets and subcategories.
![main_result](./imgs/main_result.png)
For more detailed experimental results and analysis, please refer to our paper [SciMMIR](https://arxiv.org/abs/2401.13478).
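As a concrete illustration of the zero-shot setting, the sketch below scores a small batch of test pairs with an off-the-shelf CLIP checkpoint and reports text-to-image Recall@1 on that batch. The checkpoint name `openai/clip-vit-base-patch32` and the batch size are assumptions for illustration, not necessarily the configuration used in the paper:

```python
import torch
import datasets
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A small batch of test pairs; caption i is the ground-truth match for image i
batch = datasets.load_dataset("m-a-p/SciMMIR", split="test").select(range(32))

inputs = processor(text=batch["text"], images=batch["image"],
                   return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text[i, j] is the similarity between caption i and image j
predictions = outputs.logits_per_text.argmax(dim=-1)
recall_at_1 = (predictions == torch.arange(len(batch))).float().mean().item()
print(f"Text-to-image Recall@1 on this batch: {recall_at_1:.3f}")
```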
## Dataset
The SciMMIR benchmark dataset used in this paper contains 537K scientific image-text pairs extracted from arXiv papers published over the most recent six months at the time of collection (May 2023 to October 2023). We will continue to expand the dataset by extracting data from more arXiv papers and releasing larger versions.
The dataset is available on the Hugging Face Hub as [m-a-p/SciMMIR](https://huggingface.co/datasets/m-a-p/SciMMIR), and the following code shows how to load it:
```python
import datasets

# Download the dataset (train / validation / test splits) from the Hugging Face Hub
ds_remote = datasets.load_dataset("m-a-p/SciMMIR")

# Inspect the first example of the test split
test_data = ds_remote['test']
caption = test_data[0]['text']      # the text paired with the image
image_type = test_data[0]['class']  # fine-grained subcategory label
image = test_data[0]['image']       # a PIL image
```
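If you only need one of the subcategories (for example, to reproduce the per-subset results), the split can be filtered on the `class` field. Continuing from `test_data` above:

```python
# List the subcategory labels present in the test split
labels = sorted(set(test_data['class']))
print(labels)

# Keep only the image-text pairs belonging to the first listed subcategory
subset = test_data.filter(lambda example: example['class'] == labels[0])
print(f"{labels[0]}: {len(subset)} image-text pairs")
```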
## Codes
The code for this paper can be found in our [GitHub repository](https://github.com/Wusiwei0410/SciMMIR).
## Potential TODOs before ACL
- **TODO**: case study table
- **TODO**: statistics of the paper fields (perhaps in the appendix)
- **TODO**: see if it is possible to further divide the "Figure Results" subset
## Citation
```
@misc{wu2024scimmir,
title={SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval},
author={Siwei Wu and Yizhi Li and Kang Zhu and Ge Zhang and Yiming Liang and Kaijing Ma and Chenghao Xiao and Haoran Zhang and Bohao Yang and Wenhu Chen and Wenhao Huang and Noura Al Moubayed and Jie Fu and Chenghua Lin},
year={2024},
eprint={2401.13478},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```