---
license: cc-by-nc-nd-4.0
---
# FusOn-pLM: A Fusion Oncoprotein-Specific Language Model via Focused Probabilistic Masking
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64cd5b3f0494187a9e8b7c69/XLtoAgSNqYDYdTSHEdVqS.png)
In this work, we introduce **FusOn-pLM**, a novel protein language model (pLM) that fine-tunes the state-of-the-art [ESM-2-650M](https://huggingface.co/facebook/esm2_t33_650M_UR50D) pLM on fusion oncoprotein sequences, which drive a large portion of pediatric cancers yet are heavily disordered and undruggable. We introduce a novel masked language modeling (MLM) strategy that employs a binding-site probability predictor to focus masking on key amino acid residues, thereby generating more informative, fusion oncoprotein-aware embeddings. Our model outperforms both baseline ESM-2 representations and manually constructed biophysical embeddings on fusion oncoprotein-specific benchmarks and disorder prediction tasks, motivating downstream use of FusOn-pLM embeddings for therapeutic design tasks targeting these fusions. Please feel free to try out our embeddings and reach out if you have any questions!
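
For intuition, the sketch below shows one way such focused masking could work: positions are sampled for masking in proportion to an externally predicted per-residue binding-site probability rather than uniformly. The `focused_mask_positions` helper, the 15% mask rate, and the weighted-sampling scheme are illustrative assumptions, not the exact training code used for FusOn-pLM.

```python
# Illustrative sketch of binding-site-focused masking (assumptions, not the exact training code).
# `binding_site_probs` is assumed to be a per-residue score in [0, 1] from an external predictor.
import torch

def focused_mask_positions(binding_site_probs: torch.Tensor, mask_rate: float = 0.15) -> torch.Tensor:
    """Sample positions to mask, biased toward residues with high binding-site probability."""
    seq_len = binding_site_probs.shape[0]
    n_mask = max(1, int(round(mask_rate * seq_len)))
    # Turn the scores into a sampling distribution over residue positions
    weights = binding_site_probs.clamp(min=1e-6)
    weights = weights / weights.sum()
    # Draw masked positions without replacement, weighted by binding-site probability
    return torch.multinomial(weights, n_mask, replacement=False)

# Example: mask ~15% of a 119-residue sequence, favoring predicted binding residues
probs = torch.rand(119)  # placeholder scores; in practice these come from a binding-site predictor
masked_idx = focused_mask_positions(probs)
print(masked_idx)
```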


**How to generate FusOn-pLM embeddings for your fusion oncoprotein:**

```python
from transformers import AutoTokenizer, AutoModel
import logging
import torch

# Suppress warnings about newly initialized 'esm.pooler.dense.bias', 'esm.pooler.dense.weight' layers - these are not used to extract embeddings
logging.getLogger("transformers.modeling_utils").setLevel(logging.ERROR)

# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Load the tokenizer and model
model_name = "ChatterjeeLab/FusOn-pLM" 
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.to(device)
model.eval()

# Example fusion oncoprotein sequence: MLLT10:PICALM, associated with Acute Myeloid Leukemia (LAML)  
# Amino acids 1-80 are derived from the head gene, MLLT10
# Amino acids 81-119 are derived from the tail gene, PICALM
sequence = "MVSSDRPVSLEDEVSHSMKEMIGGCCVCSDERGWAENPLVYCDGHGCSVAVHQACYGIVQVPTGPWFCRKCESQERAARVPPQMGSVPVMTQPTLIYSQPVMRPPNPFGPVSGAQIQFM"

# Tokenize the input sequence
inputs = tokenizer(sequence, return_tensors="pt", padding=True, truncation=True, max_length=2000)
inputs = {k: v.to(device) for k, v in inputs.items()}

# Get the embeddings
with torch.no_grad():
    outputs = model(**inputs)
    # The embeddings are in the last_hidden_state tensor
    embeddings = outputs.last_hidden_state
    # remove extra dimension
    embeddings = embeddings.squeeze(0)
    # remove BOS and EOS tokens
    embeddings = embeddings[1:-1, :]

# Move embeddings to CPU and convert to a numpy array (if needed)
embeddings = embeddings.cpu().numpy()

print("Per-residue embeddings shape:", embeddings.shape)

```
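
If you need a single fixed-length vector per fusion oncoprotein (e.g., as input to a downstream predictor), one simple option is to mean-pool the per-residue embeddings. The pooling choice below is our suggestion, not something prescribed by the model card:

```python
# Mean-pool the per-residue embeddings into one sequence-level vector.
# Note: mean pooling is an assumption here; other strategies (e.g., using the BOS token
# embedding) may also work depending on the downstream task.
sequence_embedding = embeddings.mean(axis=0)
print("Sequence-level embedding shape:", sequence_embedding.shape)  # (hidden_dim,), 1280 for an ESM-2-650M backbone
```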

## Repository Authors

[Sophia Vincoff](mailto:sophia.vincoff@duke.edu), PhD Student at Duke University <br>
[Pranam Chatterjee](mailto:pranam.chatterjee@duke.edu), Assistant Professor at Duke University 

Reach out to us with any questions!