---
license: mit
task_categories:
- text-generation
- translation
tags:
- chemistry
- biology
---
# ChEBI-20-MM Dataset

## Overview
ChEBI-20-MM is a multi-modal benchmark built on the ChEBI-20 dataset, designed to comprehensively evaluate models' capabilities in molecular science. It integrates multiple data modalities, including InChI, IUPAC names, SELFIES, and molecular images, making it a versatile resource for a wide range of molecular tasks.
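
Several of these representations can be derived from one another with standard cheminformatics tooling, starting from a SMILES string as used in the original ChEBI-20. The snippet below is a minimal, illustrative sketch (not taken from the dataset or from LLM4Mol), assuming the `rdkit` and `selfies` packages are installed; the example molecule is arbitrary.

```python
# Illustrative sketch: derive several of the modalities used in ChEBI-20-MM
# from a single molecule. Assumes `rdkit` and `selfies` are installed
# (e.g. `pip install rdkit selfies`); not part of the dataset itself.
from rdkit import Chem
from rdkit.Chem import Draw
import selfies as sf

smiles = "CC(=O)Oc1ccccc1C(=O)O"   # aspirin, used purely as an example
mol = Chem.MolFromSmiles(smiles)   # parse the SMILES string into an RDKit Mol

inchi = Chem.MolToInchi(mol)                   # InChI string
selfies_string = sf.encoder(smiles)            # SELFIES string
image = Draw.MolToImage(mol, size=(300, 300))  # 2D depiction as a PIL image

print(inchi)
print(selfies_string)
# IUPAC names cannot be generated by RDKit alone; in ChEBI-20-MM they are
# provided with the data (drawing on resources such as PubChem).
```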
## Dataset Description

ChEBI-20-MM expands the original ChEBI-20 dataset by incorporating diverse modalities of molecular data. The benchmark is designed to assess models in several key areas:

- **Molecule Generation**: Evaluating the ability of models to generate accurate molecular structures.
- **Image Recognition**: Testing models on their proficiency in converting molecular images into other representational formats.
- **IUPAC Recognition**: Evaluating the ability of models to generate IUPAC names from other representational formats.
- **Molecular Captioning**: Assessing the capability of models to generate descriptive captions for molecular structures.
- **Retrieval Tasks**: Measuring the effectiveness of models in retrieving molecular information accurately and efficiently.
## Utility and Significance

By broadening the variety of data modalities, ChEBI-20-MM enables a more comprehensive evaluation of how models handle multi-modal molecular data.
## How to Use

Model reviews and evaluation code related to this dataset can be accessed via the LLM4Mol repository: [LLM4Mol](https://github.com/AI-HPC-Research-Team/LLM4Mol).
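
For programmatic access, a record-loading sketch is given below, assuming the data are hosted in a format readable by the Hugging Face `datasets` library; the repository id, split name, and field names are placeholders rather than confirmed details of this card.

```python
# Minimal loading sketch. The repository id is a placeholder and the split /
# column names are assumptions; adjust them to match the actual file layout.
from datasets import load_dataset

dataset = load_dataset("<namespace>/ChEBI-20-MM")  # hypothetical repository id

print(dataset)               # available splits
print(dataset["train"][0])   # one record (e.g. SMILES, IUPAC name, description)
```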
## Acknowledgments

The development of the ChEBI-20-MM dataset was inspired by the ChEBI-20 molecule generation and captioning tasks introduced by MolT5. Additional supplementary data are derived from PubChem.