
Model Card for Mistral-Prot-v1-134M (Mistral for protein)

The Mistral-Prot-v1-134M Large Language Model (LLM) is a pretrained generative protein sequence model with 133.8M parameters. It is derived from the Mixtral-8x7B-v0.1 model, simplified for proteins: the number of layers and the hidden size were reduced. The model was pretrained on 10M protein sequences from the UniProt 50 database.

Model Architecture

Like Mixtral-8x7B-v0.1, it is a transformer model with the following architectural choices:

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
  • Mixture of Experts

Load the model from Hugging Face:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Prot-v1-134M", trust_remote_code=True) 
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Prot-v1-134M", trust_remote_code=True)
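
To check the reduced architecture (hidden size, number of layers, attention heads, experts), you can inspect the loaded configuration. This is a minimal sketch: the attribute names assume a standard Mixtral-style config, and any attribute absent from this checkpoint simply prints None.

# Inspect the configuration (attribute names assume a Mixtral-style config)
config = model.config
for attr in ["hidden_size", "num_hidden_layers", "num_attention_heads",
             "num_key_value_heads", "num_local_experts", "sliding_window"]:
    print(attr, "=", getattr(config, attr, None))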

Calculate the embedding of a protein sequence

insulin = "MALWMRLLPLLALLALWGPDPAAAFVNQHLCGSHLVEALYLVCGERGFFYTPKTRREAEDLQVGQVELGGGPGAGSLQPLALEGSLQKRGIVEQCCTSICSLYQLENYCN"
inputs = tokenizer(insulin, return_tensors = 'pt')["input_ids"]
hidden_states = model(inputs)[0] # [1, sequence_length, 256]

# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape) # expected: torch.Size([256])
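
Mean pooling is a common alternative to max pooling for sequence embeddings; the sketch below reuses the hidden_states tensor computed above and is illustrative rather than the prescribed method.

# embedding with mean pooling (alternative aggregation, same 256-dim output)
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape) # expected: torch.Size([256])

For pure inference, wrapping the forward pass in torch.no_grad() avoids building a gradient graph and reduces memory use.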

Troubleshooting

Ensure you are using a stable version of transformers, 4.34.0 or newer.
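
A quick way to check which version is installed:

import transformers
print(transformers.__version__) # should be 4.34.0 or newer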

Notice

Mistral-Prot-v1-134M is a pretrained base model for protein sequences.

Contact

Raphaël Mourad. raphael.mourad@univ-tlse3.fr
