exnx committed
Commit
2fef9fc
1 Parent(s): 3523fcd

add suggested gpu requirements

Files changed (1): README.md (+21, -1)
README.md CHANGED
@@ -24,6 +24,7 @@ See below for an [overview](#model) of the model and training. Better yet, check
  - [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main)
  - [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main)
 
+ See [GPU requirements](#hardware) for each model.
 
  ### Sample snippet
 
@@ -82,6 +83,25 @@ Otherwise, checkout of the main HyenaDNA repo for how to load weights into Pytor
  If you want a standalone version that's easy to port into your own code (and not tied to our repo or Pytorch Lightning), we have that and a HuggingFace example in ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) too.
 
 
+ ### GPU requirements (suggested)
+ <a name="hardware"></a>
+
+ Here are suggestions on the hardware (preferred minimum) we think you can use for each model.
+
+ GPU during: Pretrain, fine-tune, inference
+
+ - [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4)
+ - [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40, T4, T4)
+ - [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40, A100-40, T4)
+ - [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40, A100-40, T4)
+ - [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80, A100-80, A100-40)
+
+
+ T4: 16GB
+ A100-40: 40GB
+ A100-80: 80GB
+
+
  ## Model & Training Overview
  <a name="model"></a>
 
@@ -100,7 +120,7 @@ Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna
 
  ### Authors
 
- Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Stefano Massaroli, Clayton Rabideau, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.
+ Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.
 
  **Contact**
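The context lines above point to the README's sample snippet and the standalone HuggingFace example in ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py). A minimal sketch of that loading flow is below, assuming a transformers-ready checkpoint id (the "-hf" style repo id, the tokenizer behavior, and the output layout are assumptions to verify against the Hub and huggingface.py, not the commit's own code):

```python
# Minimal sketch: loading a HyenaDNA checkpoint through transformers.
# Assumption: a transformers-ready repo id (e.g. an "-hf" variant on the Hub);
# the original ckpt-format repos linked above are instead loaded via huggingface.py.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LongSafari/hyenadna-tiny-1k-seqlen-hf"  # assumption: verify the exact id on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
model.eval()

# DNA is tokenized at single-nucleotide resolution (A, C, G, T, N).
sequence = "ACTGACTGACTG"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs["input_ids"])

# Expected (assumption): a causal-LM style output whose logits are (batch, seq_len, vocab_size).
print(outputs.logits.shape)
```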
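The suggested-GPU table added in this commit is easy to turn into a quick local check. The helper below is illustrative and not part of the repo; its thresholds simply mirror the T4 = 16GB, A100-40 = 40GB, A100-80 = 80GB classes and the inference column of the table above.

```python
# Illustrative helper (not part of the repo): map the suggested GPU table above
# onto the machine you are actually running on.
import torch

# (checkpoint, minimum GPU memory in GB suggested for inference, per the table above)
INFERENCE_MINIMUMS_GB = [
    ("hyenadna-tiny-1k-seqlen", 16),
    ("hyenadna-small-32k-seqlen", 16),
    ("hyenadna-medium-160k-seqlen", 16),
    ("hyenadna-medium-450k-seqlen", 16),
    ("hyenadna-large-1m-seqlen", 40),
]


def models_that_should_fit() -> list[str]:
    """Return the checkpoints whose suggested inference GPU fits on CUDA device 0."""
    if not torch.cuda.is_available():
        return []
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    return [name for name, min_gb in INFERENCE_MINIMUMS_GB if total_gb >= min_gb]


if __name__ == "__main__":
    print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no GPU")
    print(models_that_should_fit())
```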