---
library_name: transformers
tags:
- protein
license: bsd-3-clause
---

ProGen2-small finetuned on 7 protein families:
 - PF00002 - GPCRs
 - PF00042 - Globins
 - PF00125 - Core histones
 - PF00127 - Copper binding proteins
 - PF00257 - Dehydrins
 - PF00262 - Calreticulins
 - PF03668 - P-loop ATPase

This is a bidirectional model trained on both the N -> C and C -> N directions of protein sequences. The generation direction is selected by the token "1" (N -> C) or "2" (C -> N), placed directly after the family token in the prompt.
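
For example, a prompt is the family token followed by the direction token and the (prefix of the) sequence to continue. The sketch below shows one way to build such prompts for both directions; the `make_prompt` helper and the globin fragment are purely illustrative, and it assumes that C -> N prompts are written as the reversed amino-acid string.

```python
# Illustrative helper (not from the model or repo): build a prompt for either direction.
# Assumption: C -> N prompts use the reversed amino-acid string.
def make_prompt(family_token: str, sequence: str, direction: str = "1") -> str:
    assert direction in ("1", "2")  # "1" = N -> C, "2" = C -> N
    seq = sequence if direction == "1" else sequence[::-1]
    return f"{family_token}{direction}{seq}"

print(make_prompt("<|pf00042|>", "MVLSPADKTNVKAAW", "1"))  # <|pf00042|>1MVLSPADKTNVKAAW
print(make_prompt("<|pf00042|>", "MVLSPADKTNVKAAW", "2"))  # <|pf00042|>2WAAKVNTKDAPSLVM
```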

See my [GitHub repo](https://github.com/hugohrban/ProGen2-finetuning/tree/main) for more information.

Example usage:

```python
from transformers import AutoModelForCausalLM
from tokenizers import Tokenizer
import torch
import torch.nn.functional as F

# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-small-mix7-bidi", trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-small-mix7-bidi")
tokenizer.no_padding()

# prepare input: family token <|pf00125|>, then direction token "2" (C -> N), then the sequence prefix
prompt = "<|pf00125|>2FDDDVSAVKSTGVSK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device)

# forward pass (no gradients needed for inference)
with torch.no_grad():
    logits = model(input_ids).logits

# print output probabilities
next_token_logits = logits[-1, :]
next_token_probs = F.softmax(next_token_logits, dim=-1)
for i in range(tokenizer.get_vocab_size(with_added_tokens=False)):
    print(f"{tokenizer.id_to_token(i)}: {100 * next_token_probs[i].item():.2f} %")
```
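
The example above only inspects the next-token distribution. To sample a continuation, you can feed the growing sequence back into the model step by step. The snippet below is a minimal temperature-sampling sketch built on the same forward pass (it re-encodes the whole sequence each step, uses no KV cache, and does not handle special or terminal tokens); for proper sampling scripts see the GitHub repo linked above.

```python
# Minimal temperature-sampling sketch (illustrative only): extend the prompt token by token.
temperature = 0.8
max_new_tokens = 20

ids = tokenizer.encode(prompt).ids
for _ in range(max_new_tokens):
    input_ids = torch.tensor(ids).to(model.device)
    with torch.no_grad():
        logits = model(input_ids).logits  # full forward pass each step (no KV cache)
    probs = F.softmax(logits[-1, :] / temperature, dim=-1)
    next_id = torch.multinomial(probs, num_samples=1).item()
    ids.append(next_id)

# stitch the tokens back together; special tokens (e.g. the family token) are kept as-is
print("".join(tokenizer.id_to_token(i) for i in ids))
```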