
This repo contains a low-rank adapter for LLaMA2-7b-chat, trained on the 🔬 molecule-oriented instructions from the 🧪 Mol-Instructions dataset.

Instructions for running it can be found at https://github.com/zjunlp/Mol-Instructions.

Please refer to our paper for more details.


🔬 Tasks

Molecule description generation
  • Please give me some details about this molecule: [C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][=Branch1][C][=O][O][C@H1][Branch2][Ring1][=Branch1][C][O][C][=Branch1][C][=O][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][O][P][=Branch1][C][=O][Branch1][C][O][O][C][C@@H1][Branch1][=Branch1][C][=Branch1][C][=O][O][N]

    The molecule is a 3-sn-phosphatidyl-L-serine in which the phosphatidyl acyl groups at positions 1 and 2 are specified as stearoyl and arachidonoyl respectively. 
    It is functionally related to an arachidonic acid and an octadecanoic acid.
Description-guided molecule design
  • Create a molecule with the structure as the one described: The molecule is a primary arylamine in which an amino functional group is substituted for one of the benzene hydrogens. It is a primary arylamine and a member of anilines.

Forward reaction prediction
  • With the provided reactants and reagents, propose a potential product: [O][=N+1][Branch1][C][O-1][C][=C][N][=C][Branch1][C][Cl][C][Branch1][C][I][=C][Ring1][Branch2].[Fe]

Retrosynthesis
  • Please suggest potential reactants used in the synthesis of the provided product: [C][=C][C][C][N][C][=Branch1][C][=O][O][C][Branch1][C][C][Branch1][C][C][C]

Reagent prediction
  • Please provide possible reagents based on the following chemical reaction: [C][C][=C][C][=C][Branch1][C][N][C][=N][Ring1][#Branch1].[O][=C][Branch1][C][Cl][C][Cl]>>[C][C][=C][C][=C][Branch1][Branch2][N][C][=Branch1][C][=O][C][Cl][C][=N][Ring1][O]

Property prediction
  • Please provide the HOMO energy value for this molecule: [C][C][O][C][C][Branch1][C][C][C][Branch1][C][C][C]
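Each task above is posed as a natural-language instruction whose molecular inputs and outputs are written as SELFIES strings. A minimal sketch of wrapping one of these instructions in an Alpaca-style prompt template, as is common for LoRA instruction-tuned models (the exact template wording used by Mol-Instructions lives in the repository; this one is an illustrative assumption):

```python
def build_prompt(instruction: str, inp: str = "") -> str:
    """Wrap a task instruction (and optional SELFIES input) in an
    Alpaca-style template. Illustrative only -- check the
    zjunlp/Mol-Instructions repo for the exact template."""
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

prompt = build_prompt(
    "Please provide the HOMO energy value for this molecule:",
    "[C][C][O][C][C][Branch1][C][C][C][Branch1][C][C][C]",
)
```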


πŸ“ Demo

As illustrated in our repository, we provide an example of how to perform generation.

>> python generate.py \
    --CLI True \
    --protein False \
    --load_8bit \
    --base_model $BASE_MODEL_PATH \
    --lora_weights $FINETUNED_MODEL_PATH

Please download Llama-2-7b-chat to obtain its pre-trained weights, then set --base_model to point to the location where the weights are saved.

For the model fine-tuned on molecule-oriented instructions, set $FINETUNED_MODEL_PATH to 'zjunlp/llama2-molinst-molecule-7b'.
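The generate.py script wraps standard Hugging Face loading. A minimal sketch of doing the same programmatically, assuming the adapter follows the usual PEFT/Alpaca-LoRA layout (the function name, dtype choice, and commented-out paths below are illustrative, not taken from the repository):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

def load_molinst_model(base_model_path: str, lora_weights: str):
    """Load the LLaMA-2 base weights, then attach the low-rank adapter."""
    tokenizer = AutoTokenizer.from_pretrained(base_model_path)
    model = AutoModelForCausalLM.from_pretrained(
        base_model_path,
        torch_dtype=torch.float16,  # or pass load_in_8bit=True, mirroring --load_8bit
        device_map="auto",
    )
    # The low-rank adapter weights are applied on top of the frozen base model.
    model = PeftModel.from_pretrained(model, lora_weights)
    model.eval()
    return tokenizer, model

# tokenizer, model = load_molinst_model(
#     "path/to/Llama-2-7b-chat-hf", "zjunlp/llama2-molinst-molecule-7b"
# )
```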

🚨 Limitations

The current state of the model, obtained via instruction tuning, is a preliminary demonstration. Its capacity to handle real-world, production-grade tasks remains limited.

📚 References

If you use our repository, please cite the following related paper:

  @article{fang2023mol,
    title={Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models},
    author={Fang, Yin and Liang, Xiaozhuan and Zhang, Ningyu and Liu, Kangwei and Huang, Rui and Chen, Zhuo and Fan, Xiaohui and Chen, Huajun},
    journal={arXiv preprint arXiv:2306.08018},
    year={2023}
  }

πŸ«±πŸ»β€πŸ«²πŸΎ Acknowledgements

We appreciate LLaMA-2, LLaMA, Huggingface Transformers Llama, Alpaca, Alpaca-LoRA, Chatbot Service and many other related works for their open-source contributions.
