PRESTO: Progressive Pretraining Enhances Synthetic Chemistry Outcomes
These are weights for a version of `checkpoints/stage2/llava-moleculestm-vicuna-7b-v1.5-pretrain_all` finetuned for multimodal applications. To query the model, place the `<molecule_2d>` placeholder token in the prompt text and provide the corresponding molecules as 2D-graph features (MoleculeSTM embeddings); a hedged usage sketch follows the repository link below.
GitHub: https://github.com/IDEA-XL/PRESTO (includes training scripts and basic inference server)
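For concreteness, here is a loudly hypothetical inference sketch. `load_presto` and `featurize_molecule_2d` are placeholder names, not confirmed PRESTO APIs (the repository's inference server is the authoritative entry point); only the `<molecule_2d>` token and the 300-dim feature size, visible in the architecture dump below, come from this card.

```python
# Hypothetical usage sketch -- placeholder helpers, NOT confirmed PRESTO API.
# See the GitHub repository above for the actual training/inference scripts.
import torch

# Placeholder: some loader from the PRESTO codebase that restores the Llama
# backbone plus the molecule_2d projector (assumption).
model, tokenizer = load_presto(
    "checkpoints/stage2/llava-moleculestm-vicuna-7b-v1.5-pretrain_all"
)

# Placeholder: a MoleculeSTM-based featurizer; per the architecture dump
# below, each molecule must become a 300-dim vector (projector in_features=300).
mol_feats = featurize_molecule_2d("CC(=O)Cl")  # e.g. from a SMILES string
assert mol_feats.shape[-1] == 300

# The <molecule_2d> placeholder marks where the projected molecule
# embedding is spliced into the token sequence, LLaVA-style.
prompt = "What product does <molecule_2d> give with ethanol?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(**inputs, molecule_2d=mol_feats, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```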
Finetuned on: yield prediction (9,515 examples).
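For illustration, one yield-prediction record might look like the following; the schema and values are assumptions for readability, not copied from the PRESTO dataset release.

```python
# Hypothetical shape of one yield-prediction example (schema assumed):
example = {
    "molecules": ["CC(=O)Cl", "OCC"],  # reactants as SMILES -> 2D graphs
    "prompt": "Given the reactants <molecule_2d> <molecule_2d>, "
              "predict the reaction yield as a percentage.",
    "response": "87%",  # illustrative target, not real data
}
```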
GPU inventory as reported by `nvidia-smi` (eight NVIDIA RTX A6000 cards):

```text
name, pci.bus_id, vbios_version
NVIDIA RTX A6000, 00000000:01:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:25:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:41:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:61:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:81:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:A1:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:C1:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:E1:00.0, 94.02.5C.00.02
```
Model architecture as printed by PyTorch:

```text
LlamaLMMForCausalLM(
  (model): LlamaLMMModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (v_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
    (molecule_2d_lmm_projector): _MLPVectorProjector(
      (mlp): Sequential(
        (0): Linear(in_features=300, out_features=4096, bias=True)
        (1): GELU(approximate='none')
        (2): Linear(in_features=4096, out_features=4096, bias=True)
      )
    )
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
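The printout shows how molecule features enter the language model: a two-layer MLP projector maps 300-dim MoleculeSTM 2D-graph embeddings into the 4096-dim Llama token-embedding space. Below is a minimal sketch reconstructing that module from the printout; the layer shapes are taken from the dump, while the forward pass (a plain MLP application) is an assumption.

```python
import torch
import torch.nn as nn

class MLPVectorProjector(nn.Module):
    """Reconstruction of _MLPVectorProjector from the printout above.

    Maps a 300-dim MoleculeSTM 2D-graph embedding into the 4096-dim
    Llama token-embedding space. Layer shapes match the dump; the
    forward logic is an assumption.
    """

    def __init__(self, in_dim: int = 300, hidden_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim, bias=True),
            nn.GELU(),  # GELU(approximate='none') is the nn.GELU default
            nn.Linear(hidden_dim, hidden_dim, bias=True),
        )

    def forward(self, mol_feats: torch.Tensor) -> torch.Tensor:
        # mol_feats: (num_molecules, 300) -> (num_molecules, 4096)
        return self.mlp(mol_feats)

# The projected vectors would replace the embedding of each <molecule_2d>
# token before the decoder layers run, LLaVA-style:
proj = MLPVectorProjector()
mol_emb = proj(torch.randn(2, 300))  # two molecules -> shape (2, 4096)
print(mol_emb.shape)
```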