---
license: apache-2.0
---
## ICMA version of Galactica-125M for the molecule captioning task (Mol2Cap) from the paper "Large Language Models are In-Context Molecule Learners"
#### Notice: The input should contain 2 context examples, and the cutoff length should be set to 2048 to ensure the best performance (a truncation sketch follows the example below).
A simple inference example:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Load the ICMA-tuned Galactica-125M model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("phenixace/ICMA-Galactica-125M-M2C")
tk = AutoTokenizer.from_pretrained("phenixace/ICMA-Galactica-125M-M2C")

# Prompt with two context examples followed by the query molecule
text = """Generate a caption for the molecule: C[C@]12CCC(=O)C=C1CC[C@@H]3[C@@H]2C(=O)C[C@]4([C@H]3CCC4=O)C
Caption: The molecule is a 3-oxo Delta(4)-steroid that is androst-4-ene carrying three oxo-substituents at positions 3, 11 and 17. It has a role as an androgen, a human urinary metabolite, a marine metabolite and an EC 1.1.1.146 (11beta-hydroxysteroid dehydrogenase) inhibitor. It is a 3-oxo-Delta(4) steroid, a 17-oxo steroid, an androstanoid and an 11-oxo steroid. It derives from a hydride of an androstane.
Generate a caption for the molecule: C[C@]12CCC(=O)C=C1CC[C@@H]3[C@@H]2C(=O)C[C@]4([C@H]3CC[C@@H]4C(=O)CO)C
Caption: The molecule is an 11-oxo steroid that is corticosterone in which the hydroxy substituent at the 11beta position has been oxidised to give the corresponding ketone. It has a role as a human metabolite and a mouse metabolite. It is a 21-hydroxy steroid, a 3-oxo-Delta(4) steroid, a 20-oxo steroid, an 11-oxo steroid, a corticosteroid and a primary alpha-hydroxy ketone. It derives from a corticosterone.
Based on the above examples, analyse the similarities and differences between the examples and finally generate a caption for the molecule: C[C@]12CCC(=O)C=C1CC[C@@H]3[C@@H]2C(=O)C[C@]\\4([C@H]3CC/C4=C/C(=O)OC)C."""
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.85,
    top_k=40,
    num_beams=1,
    repetition_penalty=1.0,
    pad_token_id=0,
)
# Tokenize the prompt and generate the caption
inputs = tk(text, return_tensors="pt", return_token_type_ids=False)
outputs = model.generate(
    **inputs,
    return_dict_in_generate=True,
    output_scores=True,
    num_return_sequences=1,
    max_new_tokens=256,
    generation_config=generation_config,
)

# Decode the full sequence (prompt plus generated caption)
decoded = tk.decode(outputs.sequences[0], skip_special_tokens=True)
print(decoded)
```
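The 2048 cutoff from the notice above can be enforced at tokenization time. The sketch below is a minimal illustration that reuses the `model`, `tk`, `text`, and `generation_config` objects from the example; the `truncation`/`max_length` settings and the prompt-stripping step are illustrative choices, not part of the original card.
```
# Minimal sketch: enforce the recommended 2048-token cutoff and decode
# only the generated caption (assumes model, tk, text and generation_config
# from the example above).
inputs = tk(
    text,
    return_tensors="pt",
    return_token_type_ids=False,
    truncation=True,
    max_length=2048,  # recommended cutoff length
)
outputs = model.generate(**inputs, max_new_tokens=256, generation_config=generation_config)

# Strip the prompt tokens so only the newly generated caption is printed
prompt_length = inputs["input_ids"].shape[1]
caption = tk.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(caption)
```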
Paper Link: https://arxiv.org/abs/2403.04197