
Model Card for Mistral-chem-v0.1

The Mistral-chem-v0.1 Large Language Model (LLM) is a pretrained generative chemical molecule model with 64 experts of 14.55M parameters each (931.5M parameters in total). It is derived from the Mistral-7B-v0.1 model, which was simplified for chemistry: the number of layers and the hidden size were reduced. The model was pretrained on 250k molecule SMILES strings from the ZINC database.

This version (v0.1) of Mistral-chem is a simple model that was primarily designed for low computational resources (the aim was not to achieve the best accuracy). Moreover, the model was trained on only 250k molecules.

For full details of this model, please read our GitHub repository.

Model Architecture

Like Mistral-7B-v0.1, it is a transformer model with the following architecture choices (see the configuration sketch after this list):

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
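
As a quick sanity check, the hyperparameters behind these choices can be read from the model configuration. This is a minimal sketch; the attribute names assume the config follows the standard Mistral layout in Transformers and may differ if the remote code defines its own configuration class.

from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the architecture.
config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-chem-v0.1", trust_remote_code=True)

# Assumed standard Mistral config attributes (verify against the actual config):
print(config.hidden_size)           # reduced hidden size
print(config.num_hidden_layers)     # reduced number of layers
print(config.num_key_value_heads)   # fewer KV heads than attention heads => grouped-query attention
print(config.sliding_window)        # sliding-window attention span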

Load the model from Hugging Face:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-chem-v0.1", trust_remote_code=True) 
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-chem-v0.1", trust_remote_code=True)
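
To see how the byte-fallback BPE tokenizer splits a molecule, you can tokenize a SMILES string directly. This is a small illustrative sketch; the example molecule below is arbitrary and not taken from the training data.

# Tokenize an arbitrary SMILES string to inspect the learned subword units.
example = "CCO"  # ethanol, chosen only for illustration
tokens = tokenizer.tokenize(example)
ids = tokenizer(example)["input_ids"]
print(tokens)  # subword pieces produced by the BPE tokenizer
print(ids)     # corresponding token ids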

Calculate the embedding of a chemical molecule (SMILES string)

smiles = "CCCCC[C@H](Br)CC"
inputs = tokenizer(smiles, return_tensors='pt')["input_ids"]
hidden_states = model(inputs)[0] # [1, sequence_length, 256]

# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape) # expected: torch.Size([256])
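
For downstream use, a common pattern is to embed several molecules and compare them, for example with mean pooling and cosine similarity. This is a minimal sketch; the two SMILES strings are arbitrary examples, and mean pooling is shown as one reasonable alternative to the max pooling above.

import torch.nn.functional as F

def embed(smiles_string):
    # Tokenize one molecule and mean-pool its hidden states into a single vector.
    ids = tokenizer(smiles_string, return_tensors='pt')["input_ids"]
    with torch.no_grad():
        states = model(ids)[0]             # [1, sequence_length, 256]
    return states.mean(dim=1).squeeze(0)   # [256]

# Arbitrary example molecules (not from the model card).
emb_a = embed("CCCCC[C@H](Br)CC")
emb_b = embed("CCO")
print(F.cosine_similarity(emb_a, emb_b, dim=0))  # scalar similarity in [-1, 1]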

Troubleshooting

Ensure you are using a stable version of Transformers, 4.34.0 or newer.
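
A quick way to check the installed version (a minimal sketch; upgrade with pip if it is older than 4.34.0):

import transformers
print(transformers.__version__)  # should be 4.34.0 or newer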

Notice

Mistral-chem is a pretrained base model for chemistry.

Contact

Raphaël Mourad. raphael.mourad@univ-tlse3.fr
