---
license: mit
tags:
- biology
- transformers
- Feature Extraction
- bioRxiv 2025.01.23.634452
---

**This is the repository for MutBERT, pretrained with mutation data from the human genome.**

**You can find all MutBERT variants [here](https://huggingface.co/JadenLong).**

## Introduction

This is the official pre-trained model introduced in *MutBERT: Probabilistic Genome Representation Improves Genomics Foundation Models*.

We sincerely thank the Tochka-AI team for the ruRoPEBert implementation, which serves as the base of MutBERT's development.

MutBERT is a transformer-based genome foundation model trained solely on the human genome.

## Model Source

- Repository: [MutBERT](https://github.com/ai4nucleome/mutBERT)
- Paper: [MutBERT: Probabilistic Genome Representation Improves Genomics Foundation Models](https://www.biorxiv.org/content/10.1101/2025.01.23.634452v1)

## Usage

### Load tokenizer and model

```python
from transformers import AutoTokenizer, AutoModel

model_name = "JadenLong/MutBERT"
# Optional: JadenLong/MutBERT-Huamn-Ref, JadenLong/MutBERT-Multi
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
```

The default attention implementation is flash attention (`"sdpa"`). If you want to use basic attention, you can replace it with `"eager"`; see [here](https://huggingface.co/JadenLong/MutBERT/blob/main/modeling_mutbert.py#L438).
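
As an alternative to editing the model code, here is a minimal sketch of selecting the attention backend at load time. It assumes the remote code honours the standard `attn_implementation` argument passed through `from_pretrained`; if it does not, edit the linked line in `modeling_mutbert.py` directly.

```python
from transformers import AutoModel

model_name = "JadenLong/MutBERT"
# Assumption: the custom MutBERT code reads the standard attn_implementation
# setting from the config; if not, change the default at the linked line.
model = AutoModel.from_pretrained(
    model_name,
    trust_remote_code=True,
    attn_implementation="eager",  # or "sdpa" (default)
)
```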

### Get embeddings

```python
import torch
import torch.nn.functional as F

from transformers import AutoTokenizer, AutoModel

model_name = "JadenLong/MutBERT"
# Optional: JadenLong/MutBERT-Huamn-Ref, JadenLong/MutBERT-Multi
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

dna = "ATCGGGGCCCATTA"
inputs = tokenizer(dna, return_tensors='pt')["input_ids"]

mut_inputs = F.one_hot(inputs, num_classes=len(tokenizer)).float().to("cpu")  # len(tokenizer) is vocab size
last_hidden_state = model(mut_inputs).last_hidden_state   # [1, sequence_length, 768]
# or: last_hidden_state = model(mut_inputs)[0]        # [1, sequence_length, 768]

# embedding with mean pooling
embedding_mean = torch.mean(last_hidden_state[0], dim=0)
print(embedding_mean.shape)  # expected: torch.Size([768])

# embedding with max pooling
embedding_max = torch.max(last_hidden_state[0], dim=0)[0]
print(embedding_max.shape)  # expected: torch.Size([768])
```
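
The one-hot matrix above reproduces a plain reference sequence. Because MutBERT consumes probability distributions over the vocabulary, a variant site can be encoded by mixing alleles. The sketch below continues from the example above and is illustrative only: it assumes the tokenizer has single-nucleotide tokens (`"A"`, `"G"`), and the position and allele frequencies are made up.

```python
# Hedged sketch: encode a SNP as a probability distribution over two alleles.
# Assumes single-nucleotide tokens "A" and "G"; position and frequencies are
# illustrative, not real data.
ref_id = tokenizer.convert_tokens_to_ids("A")
alt_id = tokenizer.convert_tokens_to_ids("G")

snp_pos = 3                            # token position of the variant (illustrative)
mut_inputs[0, snp_pos] = 0.0           # clear the one-hot row
mut_inputs[0, snp_pos, ref_id] = 0.7   # reference allele probability
mut_inputs[0, snp_pos, alt_id] = 0.3   # alternate allele probability

last_hidden_state = model(mut_inputs).last_hidden_state  # [1, sequence_length, 768]
```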

### Using as a Classifier

```python
from transformers import AutoModelForSequenceClassification

model_name = "JadenLong/MutBERT"
# Optional: JadenLong/MutBERT-Huamn-Ref, JadenLong/MutBERT-Multi
model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True, num_labels=2)
```
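
A minimal usage sketch for the classifier, assuming the sequence-classification head accepts the same probability-matrix inputs as the base model; the head is untrained here, so the predicted label is only illustrative.

```python
import torch.nn.functional as F
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name)

dna = "ATCGGGGCCCATTA"
input_ids = tokenizer(dna, return_tensors="pt")["input_ids"]
probs = F.one_hot(input_ids, num_classes=len(tokenizer)).float()

logits = model(probs).logits               # [1, num_labels]
predicted_class = logits.argmax(dim=-1).item()
```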

### With RoPE scaling

Allowed types for RoPE scaling are `linear` and `dynamic`. To extend the model's context window, add the `rope_scaling` parameter.

If you want to scale your model's context by 2x:

```python
from transformers import AutoModel

model_name = "JadenLong/MutBERT"
# Optional: JadenLong/MutBERT-Huamn-Ref, JadenLong/MutBERT-Multi
model = AutoModel.from_pretrained(
    model_name,
    trust_remote_code=True,
    rope_scaling={"type": "dynamic", "factor": 2.0},  # 2.0 for 2x scaling, 4.0 for 4x, etc.
)
```