---
language:
- en
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: choices
    dtype: string
  - name: label
    dtype: int64
  - name: description
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 705885259.25
    num_examples: 66166
  - name: valid
    num_bytes: 100589192.25
    num_examples: 9486
  - name: test
    num_bytes: 100021131.0
    num_examples: 9480
  download_size: 866619578
  dataset_size: 906495582.5
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---

# Dataset Card for ChemQA

Introducing ChemQA: a multimodal question-and-answering dataset on chemistry reasoning. This work is inspired by IsoBench [1] and ChemLLMBench [2].


## Content

There are five QA tasks in total:
* Counting Numbers of Carbons and Hydrogens in Organic Molecules: adapted from the 600 PubChem molecules curated in [2], evenly divided into validation and evaluation sets.
* Calculating Molecular Weights of Organic Molecules: adapted from the same 600 PubChem molecules from [2], evenly divided into validation and evaluation sets.
* Name Conversion from SMILES to IUPAC: adapted from the same 600 PubChem molecules from [2], evenly divided into validation and evaluation sets.
* Molecule Captioning and Editing: inspired by [2], adapted from the dataset provided in [3], following its training, validation, and evaluation splits.
* Retro-synthesis Planning: inspired by [2], adapted from the dataset provided in [4], following its training, validation, and evaluation splits.
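To illustrate what the carbon-counting task asks of a model, here is a minimal sketch that counts carbon atoms in a SMILES string with plain string matching. This is a toy approximation for illustration only (it ignores bracketed atoms, isotopes, and two-letter elements other than chlorine); a real pipeline would use a cheminformatics toolkit such as RDKit, which also resolves implicit hydrogens via valence rules.

```python
import re

def count_carbons(smiles: str) -> int:
    # Match aliphatic 'C' (excluding the 'Cl' chlorine token) and
    # aromatic 'c'. Each match corresponds to one carbon atom in
    # simple SMILES strings.
    return len(re.findall(r"C(?!l)|c", smiles))

print(count_carbons("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> 9
print(count_carbons("c1ccccc1"))               # benzene -> 6
```

Counting hydrogens is harder, since most hydrogens in SMILES are implicit and must be inferred from valence, which is exactly why the task is a useful probe of chemical reasoning.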

## Load the Dataset

```python
from datasets import load_dataset
dataset_train = load_dataset('shangzhu/ChemQA', split='train')
dataset_valid = load_dataset('shangzhu/ChemQA', split='valid')
dataset_test = load_dataset('shangzhu/ChemQA', split='test')
```
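Each example exposes the fields declared in the metadata above (`image`, `question`, `choices`, `label`, `description`, `id`). As a sketch of how one record might be turned into a multiple-choice prompt, the snippet below uses a hypothetical stand-in dict with that schema; in practice the record would come from `dataset_valid[0]`, and the field values shown here are invented for illustration.

```python
def format_prompt(example: dict) -> str:
    # Build a multiple-choice prompt from one ChemQA record.
    # 'choices' is stored as a single string; 'label' is the
    # integer index of the correct answer.
    return f"{example['question']}\nChoices: {example['choices']}"

# Hypothetical stand-in record following the card's schema.
record = {
    'question': 'How many carbon atoms does this molecule contain?',
    'choices': '(A) 8 (B) 9 (C) 10 (D) 11',
    'label': 1,
    'description': '',
    'id': 'demo-0',
}
print(format_prompt(record))
```

The `image` field (omitted above) holds a rendered depiction of the molecule as a PIL image, which is what makes the benchmark multimodal.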

## Reference

[1] Fu, D., Khalighinejad, G., Liu, O., Dhingra, B., Yogatama, D., Jia, R., & Neiswanger, W. (2024). IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations.

[2] Guo, T., Guo, K., Nan, B., Liang, Z., Guo, Z., Chawla, N., Wiest, O., & Zhang, X. (2023). What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks. Advances in Neural Information Processing Systems, 36, 59662–59688.

[3] Edwards, C., Lai, T., Ros, K., Honke, G., Cho, K., & Ji, H. (2022). Translation between Molecules and Natural Language. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 375–413.

[4] Irwin, R., Dimitriadis, S., He, J., & Bjerrum, E. J. (2022). Chemformer: a pre-trained transformer for computational chemistry. Machine Learning: Science and Technology, 3(1), 15022.



## Citation

```BibTeX
@misc{chemQA2024,
      title={ChemQA: a Multimodal Question-and-Answering Dataset on Chemistry Reasoning}, 
      author={Shang Zhu and Xuefeng Liu and Ghazal Khalighinejad},
      year={2024},
      publisher={Hugging Face},
      howpublished={\url{https://huggingface.co/datasets/shangzhu/ChemQA}},
}
```


## Contact

shangzhu@umich.edu