---
license: mit
tags:
- biology
- genomics
- long-context
library_name: transformers
---
# DNAFlash
## About
## Dependencies
- rotary_embedding_torch
- einops
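
Both are available from PyPI and can be installed with `pip install rotary-embedding-torch einops` (the hyphenated distribution name for the `rotary_embedding_torch` import is assumed here).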
## How to use
### Simple example: embedding
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and model using the pretrained model name
tokenizer = AutoTokenizer.from_pretrained("isyslab/DNAFlash")
model = AutoModel.from_pretrained("isyslab/DNAFlash", trust_remote_code=True)

# Define input sequences
sequences = [
    "GAATTCCATGAGGCTATAGAATAATCTAAGAGAAATATATATATATTGAAAAAAAAAAAAAAAAAAAAAAAGGGG"
]

# Tokenize the sequences
inputs = tokenizer(
    sequences,
    add_special_tokens=True,
    return_tensors="pt",
    padding=True,
    truncation=True,
)

# Perform a forward pass through the model to obtain the outputs, including hidden states
with torch.inference_mode():
    outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
```
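
To turn the model outputs into a fixed-size embedding per sequence, one common option is mean pooling over the token dimension. The sketch below assumes the model returns a standard `last_hidden_state` tensor, as most `transformers` base models do; the custom DNAFlash model class may expose hidden states under a different attribute.

```python
# Minimal sketch (assumes outputs.last_hidden_state exists, as in standard
# transformers base models): mean-pool token embeddings, ignoring padding.
hidden = outputs.last_hidden_state                          # (batch, seq_len, hidden_dim)
mask = inputs["attention_mask"].unsqueeze(-1).float()       # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (batch, hidden_dim)
print(embeddings.shape)
```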