
HyenaDNA

Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to 1 million tokens at single nucleotide resolution.

See below for an overview of the model and training. Better yet, check out these resources.

Resources:

Links to all HuggingFace models:

See GPU requirements for each model.

Sample snippet

This code example lets you select a pretrained model to load from HuggingFace, run inference on a DNA sequence, and extract embeddings.

See the colab or the 'huggingface.py' script in the main github repo for the definitions of these classes.


import torch

# HyenaDNAPreTrainedModel and CharacterTokenizer are defined in the colab and in
# 'huggingface.py' in the main github repo; this assumes they are importable here
from huggingface import HyenaDNAPreTrainedModel, CharacterTokenizer

# run on GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# instantiate pretrained model
pretrained_model_name = 'hyenadna-medium-450k-seqlen'
max_length = 450_000

model = HyenaDNAPreTrainedModel.from_pretrained(
    './checkpoints',
    pretrained_model_name,
)

# create tokenizer, no training involved :)
tokenizer = CharacterTokenizer(
    characters=['A', 'C', 'G', 'T', 'N'],  # add DNA characters
    model_max_length=max_length,
)

# create a sample
sequence = 'ACTG' * int(max_length/4)
tok_seq = tokenizer(sequence)["input_ids"]

# place on device, convert to tensor
tok_seq = torch.LongTensor(tok_seq).unsqueeze(0).to(device)  # unsqueeze for batch dim

# prep model and forward
model.to(device)
model.eval()  # eval mode (disables dropout) for deterministic inference

with torch.inference_mode():
    embeddings = model(tok_seq)

print(embeddings.shape)  # embeddings here!
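
For sequence-level tasks you usually want a single fixed-size vector per sequence. A minimal follow-up to the snippet above (this assumes the output has shape (batch, seq_len, d_model); adjust if your checkpoint returns something else):

# mean-pool the per-nucleotide embeddings over the sequence dimension
seq_embedding = embeddings.mean(dim=1)  # shape: (batch, d_model)
print(seq_embedding.shape)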

How to use pretrained weights

The colab is the easiest entry point: you can fine-tune a small model and run inference on DNA sequences up to 450k tokens on the free tier (T4 GPU), and up to 1 million on the paid tier (A100). It handles all the HuggingFace integration for you, so it's helpful to see this example first.

Otherwise, check out the main HyenaDNA repo for how to load weights into Pytorch Lightning. We use Pytorch Lightning for pretraining and fine-tuning all of our models. If you want to use our actual pretraining code, clone this HuggingFace repo to download the weights.ckpt, and then pass it to Pytorch Lightning via command line or config. See the github README for how to do all that.
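
As a rough sketch of the download step in Python (the repo id below is an assumption, substitute whichever model you want; requires the huggingface_hub package):

from huggingface_hub import hf_hub_download

# fetch the Pytorch Lightning checkpoint from the HuggingFace repo
ckpt_path = hf_hub_download(
    repo_id='LongSafari/hyenadna-medium-450k-seqlen',  # assumed repo id
    filename='weights.ckpt',
)
print(ckpt_path)  # pass this path to Pytorch Lightning via command line or config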

If you want a standalone version that's easy to port into your own code (and not tied to our repo or Pytorch Lightning), we provide that, along with a HuggingFace example, in 'huggingface.py' too.

GPU requirements (suggested)

Here are suggested minimum hardware requirements for each model, i.e. the GPU we'd recommend during pretraining, fine-tuning, and inference.

GPU memory for reference:

T4: 16 GB
A100-40: 40 GB
A100-80: 80 GB

Model & Training Overview

HyenaDNA uses a simple stack of Hyena operators, a subquadratic drop-in replacement for attention in Transformers. The Hyena operator matches attention quality in language modeling by using modified input projections, implicit long convolutions, and gating, all subquadratic operations.
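
As a toy illustration of that idea (not the official implementation; the module, names, and sizes below are made up for the sketch), a Hyena-style block can be written as input projections, an FFT-based long convolution in O(L log L), and elementwise gating:

import torch
import torch.nn as nn

class ToyHyenaOperator(nn.Module):
    """Illustrative Hyena-style operator (not the official implementation):
    input projections, an implicit long convolution, and elementwise gating."""

    def __init__(self, d_model, max_len):
        super().__init__()
        self.proj_v = nn.Linear(d_model, d_model)   # value projection
        self.proj_g = nn.Linear(d_model, d_model)   # gate projection
        self.out = nn.Linear(d_model, d_model)
        # "implicit" long filter; the real model parameterizes this with a small
        # network over positions, here it is just a learned tensor for brevity
        self.filter = nn.Parameter(0.02 * torch.randn(d_model, max_len))

    def forward(self, x):                            # x: (batch, seq_len, d_model)
        L = x.shape[1]
        v = self.proj_v(x)                           # value branch
        g = torch.sigmoid(self.proj_g(x))            # gate branch
        k = self.filter[:, :L]                       # (d_model, L) filter
        # FFT-based long convolution: O(L log L) instead of O(L^2)
        v_f = torch.fft.rfft(v.transpose(1, 2), n=2 * L)
        k_f = torch.fft.rfft(k, n=2 * L)
        conv = torch.fft.irfft(v_f * k_f, n=2 * L)[..., :L]
        conv = conv.transpose(1, 2)                  # (batch, seq_len, d_model)
        return self.out(g * conv)                    # gate, then output projection

# usage: block = ToyHyenaOperator(d_model=128, max_len=1024)
#        y = block(torch.randn(2, 1024, 128))       # (2, 1024, 128)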

This enables HyenaDNA to reach context lengths up to 500x longer than previous genomic Transformer models that use dense attention, and to train 160x faster at sequence length 1M (compared to Flash Attention).

We use a single-character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a global receptive field at each layer.
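
Concretely, single-nucleotide tokenization just maps each character to its own id; a tiny illustration (the real CharacterTokenizer's ids and special tokens may differ):

# illustrative vocabulary only
vocab = {'[PAD]': 0, '[UNK]': 1, '[CLS]': 2, '[SEP]': 3,
         'A': 4, 'C': 5, 'G': 6, 'T': 7, 'N': 8}
ids = [vocab[ch] for ch in 'ACGTN']
print(ids)  # one token id per nucleotide; no merging into subwords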

We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).
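
This is standard causal language modeling with a shift-by-one target; a minimal sketch of the objective (tensor names and shapes here are illustrative, not the repo's actual training loop):

import torch.nn.functional as F

# logits: (batch, seq_len, vocab_size), tokens: (batch, seq_len)
def next_nucleotide_loss(logits, tokens):
    # predict nucleotide t+1 from positions <= t
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.shape[-1]),  # drop the last prediction
        tokens[:, 1:].reshape(-1),                     # drop the first target
    )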

HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.

Check out our blog for more details on HyenaDNA!

Authors

Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.

Contact

Eric Nguyen, etnguyen@stanford.edu
Michael Poli, poli@stanford.edu
Marjan Faizi, Marjan_Faizi@hms.harvard.edu

Citation

Feel free to cite us :)

@article{nguyen2023hyenadna,
      title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution}, 
      author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré},
      year={2023},
      eprint={2306.15794},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}