---
license: mit
---
# ESM-2 for Post-Translational Modification

This is a LoRA fine-tuned version of [`facebook/esm2_t12_35M_UR50D`](https://huggingface.co/facebook/esm2_t12_35M_UR50D) for predicting post-translational modification (PTM) sites. The task is framed as binary token classification: each residue in a protein sequence is labeled as a PTM site or not.
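For context, an adapter like this is typically built with `peft`'s `LoraConfig` and `get_peft_model`. The sketch below is illustrative only: the rank, alpha, dropout, and target modules are assumptions, not the documented hyperparameters of this checkpoint.

```python
# Hypothetical sketch of configuring a LoRA adapter for ESM-2 token
# classification with peft. All hyperparameter values here are assumptions;
# the actual settings used to train this checkpoint are not documented here.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForTokenClassification

base = AutoModelForTokenClassification.from_pretrained(
    "facebook/esm2_t12_35M_UR50D", num_labels=2
)
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=16,                                       # assumed rank
    lora_alpha=16,                              # assumed scaling factor
    lora_dropout=0.1,                           # assumed dropout
    target_modules=["query", "key", "value"],   # ESM-2 attention projections
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```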
## Metrics

Note the heavy class imbalance in this task: most residues are not PTM sites, so accuracy is dominated by the negative class, and precision is low; F1 and MCC give a more informative picture of performance.
```json
{
  "eval_loss": 0.4661065936088562,
  "eval_accuracy": 0.9876599555715365,
  "eval_auc": 0.8673592596422711,
  "eval_precision": 0.14941997670219148,
  "eval_recall": 0.7463955099754822,
  "eval_f1": 0.24899413187145658,
  "eval_mcc": 0.3305508498121041
}
```
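For reference, here is a minimal sketch (not the original evaluation code) of how token-level metrics like these can be computed with scikit-learn, assuming the common convention that ignored positions (special tokens and padding) are labeled `-100`:

```python
# Minimal sketch of token-classification metrics; assumes labels use -100
# for positions that should be ignored (special tokens / padding).
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    matthews_corrcoef,
    precision_score,
    recall_score,
    roc_auc_score,
)

def compute_token_metrics(logits: np.ndarray, labels: np.ndarray) -> dict:
    """logits: (batch, seq_len, 2); labels: (batch, seq_len), -100 = ignore."""
    mask = labels != -100
    # Softmax over the two classes to get P(ptm site) per position
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    scores = probs[..., 1][mask]
    preds = logits.argmax(axis=-1)[mask]
    y = labels[mask]
    return {
        "eval_accuracy": accuracy_score(y, preds),
        "eval_auc": roc_auc_score(y, scores),
        "eval_precision": precision_score(y, preds),
        "eval_recall": recall_score(y, preds),
        "eval_f1": f1_score(y, preds),
        "eval_mcc": matthews_corrcoef(y, preds),
    }
```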
## Using the Model

To use this model, first install the required dependencies:
```
!pip install transformers -q
!pip install peft -q
```

Then load the base model, apply the LoRA adapter, and run inference on a sequence:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel
import torch

# Path to the saved LoRA adapter
model_path = "AmelieSchreiber/esm2_t12_35M_ptm_lora_2100K"
# ESM-2 base model
base_model_path = "facebook/esm2_t12_35M_UR50D"

# Load the base model and apply the LoRA adapter
base_model = AutoModelForTokenClassification.from_pretrained(base_model_path)
loaded_model = PeftModel.from_pretrained(base_model, model_path)

# Ensure the model is in evaluation mode
loaded_model.eval()

# Load the tokenizer
loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path)

# Protein sequence for inference
protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT"  # Replace with your actual sequence

# Tokenize the sequence
inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length')

# Run inference without tracking gradients
with torch.no_grad():
    logits = loaded_model(**inputs).logits

# Convert input IDs back to tokens and take the argmax over the two classes
tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
predictions = torch.argmax(logits, dim=2)

# Map class indices to labels
id2label = {
    0: "No ptm site",
    1: "ptm site"
}

# Print the predicted label for each residue, skipping special tokens
for token, prediction in zip(tokens, predictions[0].numpy()):
    if token not in ['<pad>', '<cls>', '<eos>']:
        print((token, id2label[prediction]))
```
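Because precision is low under the default argmax decision rule, it can help to look at per-residue probabilities and pick your own decision threshold. A minimal sketch that reuses `logits`, `tokens`, and `id2label` from the block above (the threshold value is an illustrative assumption, not a tuned setting):

```python
# Softmax over the two classes gives a per-residue PTM probability,
# letting you trade precision against recall with a custom threshold.
probabilities = torch.softmax(logits, dim=2)[0, :, 1]  # P(ptm site) per token

threshold = 0.90  # illustrative value; tune on held-out data
for token, p in zip(tokens, probabilities.tolist()):
    if token not in ['<pad>', '<cls>', '<eos>']:
        print((token, round(p, 3), id2label[int(p >= threshold)]))
```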