---
license: bsd-3-clause
---

HyenaDNA

Welcome! HyenaDNA is a genomic foundation model pretrained on the human reference genome with sequence lengths of up to 1 million tokens at single nucleotide resolution.

See below for an overview of the model and training. Better yet, check out our other resources:

Links to all HuggingFace models:

Sample snippet

This code example lets you select which pretrained model to load from HuggingFace and run inference to get embeddings.

See the huggingface.py script in the main GitHub repo, or the Colab, for these classes.


```python
import torch

# HyenaDNAPreTrainedModel and CharacterTokenizer come from the
# huggingface.py script in the main repo (see above)

# instantiate pretrained model
pretrained_model_name = 'hyenadna-medium-450k-seqlen'
max_length = 450_000  # max sequence length for this checkpoint

model = HyenaDNAPreTrainedModel.from_pretrained(
    './checkpoints',
    pretrained_model_name,
)

# create tokenizer
tokenizer = CharacterTokenizer(
    characters=['A', 'C', 'G', 'T', 'N'],  # add DNA characters
    model_max_length=max_length,
)

# create a sample
sequence = 'ACTG' * int(max_length / 4)
tok_seq = tokenizer(sequence)["input_ids"]

# place on device, convert to tensor
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tok_seq = torch.LongTensor(tok_seq).unsqueeze(0).to(device)  # unsqueeze for batch dim

# prep model and forward
model.to(device)
model.eval()

with torch.inference_mode():
    embeddings = model(tok_seq)

print(embeddings.shape)  # embeddings here!
```

How to use pretrained weights

The Colab is the easiest entry point: you can fine-tune a small model and run inference on DNA sequences up to 450k on the free tier (T4 GPU), and up to 1 million on the paid tier (A100). It handles all the HuggingFace integration for you, so it's a helpful reference for how the pieces fit together.

Otherwise, check out the main HyenaDNA repo for how to load weights into PyTorch Lightning. We use PyTorch Lightning for pretraining and fine-tuning most of our models. If you want to use our actual pretraining code, you can clone this HuggingFace repo to download the weights.ckpt file, and then pass it in.

If you want a standalone version that's easy to port into your own code, we have that and a HuggingFace example in the repo too, under huggingface.py.

Model & Training Overview

HyenaDNA uses a simple stack of Hyena operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator matches attention quality in language modeling by combining modified input projections, implicit convolutions, and gating, all of which are subquadratic operations.
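The key to the subquadratic scaling is that a long convolution can be computed with FFTs in O(L log L) rather than O(L^2). Below is a minimal NumPy sketch of that idea, with an elementwise gate applied afterward. This is illustrative only, not the actual Hyena implementation, and all variable names here are made up:

```python
import numpy as np

def fft_causal_conv(u, k):
    """Causal convolution of signal u with a long filter k,
    computed via FFT in O(L log L) instead of O(L^2)."""
    L = u.shape[-1]
    n = 2 * L  # zero-pad so the circular FFT conv equals a linear conv
    y = np.fft.irfft(np.fft.rfft(u, n=n) * np.fft.rfft(k, n=n), n=n)
    return y[..., :L]  # keep only the causal part

rng = np.random.default_rng(0)
L = 1024
u = rng.standard_normal(L)      # one input projection of the sequence
k = rng.standard_normal(L)      # long filter (parameterized implicitly in Hyena)
v = rng.standard_normal(L)      # a second projection used as a gate

y = fft_causal_conv(u, k) * v   # convolution followed by elementwise gating
print(y.shape)
```

In the real model the filter `k` is produced implicitly (as a function of position) rather than stored as an explicit length-L array, which is what makes very long filters practical.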

This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention).

We use a single-character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single-nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a global receptive field at each layer.
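Single-character tokenization is simple enough to sketch in a few lines. This is an illustration of the idea only, not the actual CharacterTokenizer class from the repo (the real tokenizer also handles special tokens and padding), and the id assignments here are made up:

```python
# map each nucleotide character to its own token id
vocab = {ch: i for i, ch in enumerate(['A', 'C', 'G', 'T', 'N'])}

def encode(seq):
    # one token per nucleotide -> single-nucleotide resolution
    return [vocab[ch] for ch in seq]

print(encode('ACGTN'))  # [0, 1, 2, 3, 4]
```

Because every base is its own token, a 1M-nucleotide sequence becomes a 1M-token input, which is exactly why the subquadratic operator matters.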

We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).

HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.

Check out our blog for more details on HyenaDNA!

Authors

Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch Sykes, Michael Wornow, Stefano Massaroli, Aman Patel, Clayton Rabideau, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.

Contact

Eric Nguyen, etnguyen@stanford.edu
Michael Poli, poli@stanford.edu
Marjan Faizi, Marjan_Faizi@hms.harvard.edu

Citation

If you use HyenaDNA in your work, feel free to cite us :)