---
base_model: Qwen/Qwen2-7B-Instruct
tags:
- Text-Graph-to-Text
- chemistry
- material science
- molecular design
language:
- en
pipeline_tag: graph-ml
library_name: peft
datasets:
- liuganghuggingface/Llamole-MolQA
---
|
|
|
# Model Card for the Llamole Adapter (Qwen2-7B-Instruct)
|
|
|
This repository contains the PEFT adapter fine-tuned for Llamole (a multimodal large language model for molecular discovery), built on the Qwen/Qwen2-7B-Instruct base model.
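Since the card lists `peft` as the library and Qwen/Qwen2-7B-Instruct as the base model, a minimal loading sketch follows. This is an assumption-based illustration, not the project's documented entry point; the `adapter_id` argument is a placeholder for this adapter's Hub repository id, which the card does not state:

```python
# Minimal sketch: attaching a PEFT adapter to the base model.
# Assumes `transformers` and `peft` are installed; the adapter repo id
# supplied by the caller is a placeholder, not confirmed by this card.
BASE_MODEL_ID = "Qwen/Qwen2-7B-Instruct"  # base model from the card metadata

def load_llamole_adapter(adapter_id: str):
    """Load the base model and apply the fine-tuned adapter on top."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
    base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID)
    model = PeftModel.from_pretrained(base_model, adapter_id)
    return model, tokenizer
```

For the full multimodal pipeline (graph inputs, retrosynthetic planning), see the repository linked below.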
|
|
|
## Model Sources
|
|
|
- **Repository:** https://github.com/liugangcode/Llamole
- **Paper:** [Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning](https://arxiv.org/abs/2410.04223)
- **Demo:** Coming soon
|
|
|
## Training Details
|
|
|
Coming soon
|
|
|