---
license: cc-by-nc-nd-4.0
---

# FusOn-pLM: A Fusion Oncoprotein-Specific Language Model via Focused Probabilistic Masking

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64cd5b3f0494187a9e8b7c69/eR38p4VJhWJhwsqjZZdYp.png)

In this work, we introduce **FusOn-pLM**, a novel protein language model (pLM) that fine-tunes the state-of-the-art [ESM-2-650M](https://huggingface.co/facebook/esm2_t33_650M_UR50D) model on fusion oncoprotein sequences, which drive a large portion of pediatric cancers yet are heavily disordered and largely undruggable. Specifically, we introduce a masked language modeling (MLM) strategy that employs a binding-site probability predictor to focus masking on key amino acid residues, thereby producing more informative, fusion oncoprotein-aware embeddings. FusOn-pLM outperforms both baseline ESM-2 representations and manually constructed biophysical embeddings on fusion oncoprotein-specific benchmarks and disorder prediction tasks, motivating the use of its embeddings for downstream therapeutic design tasks targeting these fusions. Please feel free to try out our embeddings and reach out if you have any questions!
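
To make the focused masking idea concrete, below is a minimal sketch of probability-weighted mask sampling for MLM training. It is an illustration, not FusOn-pLM's actual training code: the function name `focused_probabilistic_masking`, the 50/50 mix of binding-site probabilities with a uniform floor, and the 15% masking fraction are all assumptions made for the example; only the general idea of biasing mask positions toward residues with high predicted binding-site probability comes from the description above.

```
import torch

def focused_probabilistic_masking(input_ids, binding_site_probs, mask_token_id,
                                  special_tokens_mask, mask_fraction=0.15):
    """Sample MLM mask positions, weighting residues by predicted binding-site probability.

    input_ids:           (seq_len,) token ids for one tokenized sequence
    binding_site_probs:  (seq_len,) per-residue binding-site probabilities in [0, 1]
    special_tokens_mask: (seq_len,) bool tensor, True at CLS/EOS/pad positions
    """
    # Mix the binding-site signal with a uniform floor so every residue keeps a
    # nonzero chance of being masked (illustrative weighting, not the published scheme)
    weights = 0.5 * binding_site_probs + 0.5
    weights = weights.masked_fill(special_tokens_mask, 0.0)

    # Sample a fixed fraction of maskable positions, biased toward predicted binding sites
    num_maskable = int((~special_tokens_mask).sum().item())
    num_to_mask = max(1, int(mask_fraction * num_maskable))
    mask_positions = torch.multinomial(weights, num_to_mask, replacement=False)

    # Labels: the MLM loss only scores the masked residues (-100 is ignored)
    labels = torch.full_like(input_ids, -100)
    labels[mask_positions] = input_ids[mask_positions]

    masked_ids = input_ids.clone()
    masked_ids[mask_positions] = mask_token_id
    return masked_ids, labels
```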
**How to generate FusOn-pLM embeddings for your fusion oncoprotein:**

```
from transformers import AutoTokenizer, AutoModel
import torch

# Load the tokenizer and model
model_name = "ChatterjeeLab/FusOn-pLM"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Example fusion oncoprotein sequence: MLLT10:PICALM, associated with Acute Myeloid Leukemia (LAML)
# Amino acids 1-80 are derived from the head gene, MLLT10
# Amino acids 81-119 are derived from the tail gene, PICALM
sequence = "MVSSDRPVSLEDEVSHSMKEMIGGCCVCSDERGWAENPLVYCDGHGCSVAVHQACYGIVQVPTGPWFCRKCESQERAARVPPQMGSVPVMTQPTLIYSQPVMRPPNPFGPVSGAQIQFM"

# Tokenize the input sequence
inputs = tokenizer(sequence, return_tensors="pt")

# Get the embeddings
with torch.no_grad():
    outputs = model(**inputs)
    # The per-residue embeddings are in the last_hidden_state tensor
    embeddings = outputs.last_hidden_state

# Convert embeddings to a numpy array (if needed)
embeddings = embeddings.squeeze(0).numpy()

print("Per-residue embeddings shape:", embeddings.shape)
```
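
If you need one fixed-length vector per fusion oncoprotein (for example, as input to a downstream property predictor), a common option is to mean-pool the per-residue embeddings. Continuing from the snippet above, the sketch below averages over sequence positions using the tokenizer's attention mask; whether to also drop the special start/end token embeddings before pooling is left as a modeling choice.

```
# Optional: mean-pool per-residue embeddings into a single sequence-level vector.
# Reuses `model` and `inputs` from the snippet above; the attention mask excludes padding.
import torch

with torch.no_grad():
    outputs = model(**inputs)
    residue_embeddings = outputs.last_hidden_state          # (1, seq_len, hidden_dim)
    mask = inputs["attention_mask"].unsqueeze(-1).float()   # (1, seq_len, 1)
    pooled = (residue_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

print("Sequence-level embedding shape:", pooled.squeeze(0).shape)
```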
## Repository Authors

[Sophia Vincoff](mailto:sophia.vincoff@duke.edu), PhD Student at Duke University <br>
[Pranam Chatterjee](mailto:pranam.chatterjee@duke.edu), Assistant Professor at Duke University

Reach out to us with any questions!