---
license: mit
tags:
- biology
- genomics
- long-context
library_name: transformers
---
# DNAFlash
## About
### Dependencies
```
rotary_embedding_torch
einops
```
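
The model's custom code requires these packages; they can be installed with `pip install rotary_embedding_torch einops`. The example below additionally assumes `torch` and `transformers` are available.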
## How to use
### Simple example: embedding
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and model using the pretrained model name
tokenizer = AutoTokenizer.from_pretrained("isyslab/DNAFlash")
model = AutoModel.from_pretrained("isyslab/DNAFlash", trust_remote_code=True)

# Define input sequences
sequences = [
    "GAATTCCATGAGGCTATAGAATAATCTAAGAGAAATATATATATATTGAAAAAAAAAAAAAAAAAAAAAAAGGGG"
]

# Tokenize the sequences
inputs = tokenizer(
    sequences,
    add_special_tokens=True,
    return_tensors="pt",
    padding=True,
    truncation=True,
)

# Perform a forward pass through the model to obtain the outputs, including hidden states
with torch.inference_mode():
    outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
```
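
Because the model is loaded with `trust_remote_code=True`, the exact output structure depends on the custom model class. Assuming it follows the standard `transformers` convention of exposing `last_hidden_state`, a minimal sketch for reducing the per-token hidden states to one embedding per sequence via masked mean-pooling:

```python
# A minimal sketch, assuming `outputs.last_hidden_state` exists (standard
# transformers convention); adapt if the custom model class returns a
# different output structure.
token_embeddings = outputs.last_hidden_state    # (batch, seq_len, hidden)
mask = inputs["attention_mask"].unsqueeze(-1)   # (batch, seq_len, 1)

# Mean-pool over real (non-padding) tokens to get one vector per sequence
seq_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(seq_embeddings.shape)                     # (batch, hidden)
```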
## Citation