---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: choices
    dtype: string
  - name: label
    dtype: int64
  - name: description
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: valid
    num_bytes: 14944628.0
    num_examples: 300
  - name: test
    num_bytes: 14824677.0
    num_examples: 300
  download_size: 23613385
  dataset_size: 29769305.0
configs:
- config_name: default
  data_files:
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---


# Dataset Card for ChemQA-Lite

ChemQA is a multimodal question-and-answering dataset on chemistry reasoning. This work is inspired by IsoBench [1] and ChemLLMBench [2].


## Content

This is a lite version of the full [ChemQA](https://huggingface.co/datasets/shangzhu/ChemQA) dataset. It provides two splits, `valid` and `test`, with 300 examples each. Each example contains the fields `image`, `question`, `choices`, `label`, `description`, and `id`.

## Load the Dataset

```python
from datasets import load_dataset

# Each split contains 300 examples
dataset_valid = load_dataset('shangzhu/ChemQA-lite', split='valid')
dataset_test = load_dataset('shangzhu/ChemQA-lite', split='test')
```
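Each example can then be accessed by index. The sketch below assumes the field names listed in the dataset metadata above (`image`, `question`, `choices`, `label`, `description`, `id`); the `image` feature is decoded to a PIL image by the `datasets` library.

```python
# Sketch: inspect a single example, assuming the fields from the dataset metadata.
example = dataset_valid[0]

print(example['question'])     # question text
print(example['choices'])      # answer choices, stored as a single string
print(example['label'])        # integer index of the correct choice
print(example['description'])  # free-text description field
print(example['id'])           # example identifier

# The image feature is decoded to a PIL image and can be saved or displayed.
example['image'].save('chemqa_example.png')
```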

## Reference

[1] Fu, D., Khalighinejad, G., Liu, O., Dhingra, B., Yogatama, D., Jia, R., & Neiswanger, W. (2024). IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations.

[2] Guo, T., Guo, K., Nan, B., Liang, Z., Guo, Z., Chawla, N., Wiest, O., & Zhang, X. (2023). What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks. Advances in Neural Information Processing Systems, 36, 59662–59688.

[3] Edwards, C., Lai, T., Ros, K., Honke, G., Cho, K., & Ji, H. (2022). Translation between Molecules and Natural Language. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 375–413.

[4] Irwin, R., Dimitriadis, S., He, J., & Bjerrum, E. J. (2022). Chemformer: a pre-trained transformer for computational chemistry. Machine Learning: Science and Technology, 3(1), 15022.



## Citation

```BibTeX
@misc{chemQA2024,
      title={ChemQA: a Multimodal Question-and-Answering Dataset on Chemistry Reasoning}, 
      author={Shang Zhu and Xuefeng Liu and Ghazal Khalighinejad},
      year={2024},
      publisher={Hugging Face},
      howpublished={\url{https://huggingface.co/datasets/shangzhu/ChemQA}},
}

@misc{zimmermann2024reflections2024largelanguage,
      title={Reflections from the 2024 Large Language Model (LLM) Hackathon for Applications in Materials Science and Chemistry}, 
      author={Yoel Zimmermann and Adib Bazgir and Zartashia Afzal and Fariha Agbere and Qianxiang Ai and Nawaf Alampara and Alexander Al-Feghali and Mehrad Ansari and Dmytro Antypov and Amro Aswad and Jiaru Bai and Viktoriia Baibakova and Devi Dutta Biswajeet and Erik Bitzek and Joshua D. Bocarsly and Anna Borisova and Andres M Bran and L. Catherine Brinson and Marcel Moran Calderon and Alessandro Canalicchio and Victor Chen and Yuan Chiang and Defne Circi and Benjamin Charmes and Vikrant Chaudhary and Zizhang Chen and Min-Hsueh Chiu and Judith Clymo and Kedar Dabhadkar and Nathan Daelman and Archit Datar and Matthew L. Evans and Maryam Ghazizade Fard and Giuseppe Fisicaro and Abhijeet Sadashiv Gangan and Janine George and Jose D. Cojal Gonzalez and Michael Götte and Ankur K. Gupta and Hassan Harb and Pengyu Hong and Abdelrahman Ibrahim and Ahmed Ilyas and Alishba Imran and Kevin Ishimwe and Ramsey Issa and Kevin Maik Jablonka and Colin Jones and Tyler R. Josephson and Greg Juhasz and Sarthak Kapoor and Rongda Kang and Ghazal Khalighinejad and Sartaaj Khan and Sascha Klawohn and Suneel Kuman and Alvin Noe Ladines and Sarom Leang and Magdalena Lederbauer and Sheng-Lun Mark Liao and Hao Liu and Xuefeng Liu and Stanley Lo and Sandeep Madireddy and Piyush Ranjan Maharana and Shagun Maheshwari and Soroush Mahjoubi and José A. Márquez and Rob Mills and Trupti Mohanty and Bernadette Mohr and Seyed Mohamad Moosavi and Alexander Moßhammer and Amirhossein D. Naghdi and Aakash Naik and Oleksandr Narykov and Hampus Näsström and Xuan Vu Nguyen and Xinyi Ni and Dana O'Connor and Teslim Olayiwola and Federico Ottomano and Aleyna Beste Ozhan and Sebastian Pagel and Chiku Parida and Jaehee Park and Vraj Patel and Elena Patyukova and Martin Hoffmann Petersen and Luis Pinto and José M. Pizarro and Dieter Plessers and Tapashree Pradhan and Utkarsh Pratiush and Charishma Puli and Andrew Qin and Mahyar Rajabi and Francesco Ricci and Elliot Risch and Martiño Ríos-García and Aritra Roy and Tehseen Rug and Hasan M Sayeed and Markus Scheidgen and Mara Schilling-Wilhelmi and Marcel Schloz and Fabian Schöppach and Julia Schumann and Philippe Schwaller and Marcus Schwarting and Samiha Sharlin and Kevin Shen and Jiale Shi and Pradip Si and Jennifer D'Souza and Taylor Sparks and Suraj Sudhakar and Leopold Talirz and Dandan Tang and Olga Taran and Carla Terboven and Mark Tropin and Anastasiia Tsymbal and Katharina Ueltzen and Pablo Andres Unzueta and Archit Vasan and Tirtha Vinchurkar and Trung Vo and Gabriel Vogel and Christoph Völker and Jan Weinreich and Faradawn Yang and Mohd Zaki and Chi Zhang and Sylvester Zhang and Weijie Zhang and Ruijie Zhu and Shang Zhu and Jan Janssen and Ian Foster and Ben Blaiszik},
      year={2024},
      eprint={2411.15221},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2411.15221}, 
}
```


## Contact

shangzhu@umich.edu