lhallee committed
Commit 3e2b001 · verified · 1 Parent(s): cfa15aa

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +34 -16
README.md CHANGED
@@ -53,7 +53,7 @@ from transformers import AutoModel
 model = AutoModel.from_pretrained(
     "Synthyra/FastESMFold",
     trust_remote_code=True,
-    torch_dtype=torch.float32,
+    dtype=torch.float32,
 ).cuda().eval()
 
 # Standard fold (no TTT)
@@ -175,25 +175,43 @@ TTT parameters are set via `config.ttt_config` (a dict) or by modifying `model._
 
 ## Citations
 
-If you use this implementation, please cite FastPLMs and the original ProteinTTT paper:
-
 ```bibtex
 @misc{FastPLMs,
-  author = {Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
-  title = {FastPLMs: Fast, efficient, protein language model inference from Huggingface AutoModel.},
-  year = {2024},
-  url = {https://huggingface.co/Synthyra/ESMplusplus_small},
-  DOI = {10.57967/hf/3726},
-  publisher = {Hugging Face}
+  author={Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
+  title={FastPLMs: Fast, efficient, protein language model inference from Huggingface AutoModel.},
+  year={2024},
+  url={https://huggingface.co/Synthyra/ESMplusplus_small},
+  DOI={10.57967/hf/3726},
+  publisher={Hugging Face}
 }
+```
 
+```bibtex
 @misc{bushuiev2026proteinneed,
-  title = {One protein is all you need},
-  author = {Anton Bushuiev and Roman Bushuiev and Olga Pimenova and Nikola Zadorozhny and Raman Samusevich and Elisabet Manaskova and Rachel Seongeun Kim and Hannes St\"ark and Jiri Sedlar and Martin Steinegger and Tom\'a\v{s} Pluskal and Josef Sivic},
-  year = {2026},
-  eprint = {2411.02109},
-  archivePrefix = {arXiv},
-  primaryClass = {cs.LG},
-  url = {https://arxiv.org/abs/2411.02109},
+  title={One protein is all you need},
+  author={Anton Bushuiev and Roman Bushuiev and Olga Pimenova and Nikola Zadorozhny and Raman Samusevich and Elisabet Manaskova and Rachel Seongeun Kim and Hannes St\"ark and Jiri Sedlar and Martin Steinegger and Tom\'a\v{s} Pluskal and Josef Sivic},
+  year={2026},
+  eprint={2411.02109},
+  archivePrefix={arXiv},
+  primaryClass={cs.LG},
+  url={https://arxiv.org/abs/2411.02109}
+}
+```
+
+```bibtex
+@article{dong2024flexattention,
+  title={Flex Attention: A Programming Model for Generating Optimized Attention Kernels},
+  author={Dong, Juechu and Feng, Boyuan and Guessous, Driss and Liang, Yanbo and He, Horace},
+  journal={arXiv preprint arXiv:2412.05496},
+  year={2024}
+}
+```
+
+```bibtex
+@inproceedings{paszke2019pytorch,
+  title={PyTorch: An Imperative Style, High-Performance Deep Learning Library},
+  author={Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and Gimelshein, Natalia and Antiga, Luca and Desmaison, Alban and K{\"o}pf, Andreas and Yang, Edward and DeVito, Zach and Raison, Martin and Tejani, Alykhan and Chilamkurthy, Sasank and Steiner, Benoit and Fang, Lu and Bai, Junjie and Chintala, Soumith},
+  booktitle={Advances in Neural Information Processing Systems 32},
+  year={2019}
 }
 ```
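The first hunk renames the `torch_dtype` keyword argument to `dtype`, which tracks the keyword accepted by newer `transformers` `from_pretrained` releases. What the dtype choice trades off can be sketched with a plain tensor (a hypothetical stand-in for the model's weights, not part of this repository):

```python
import torch

# Hypothetical weight tensor standing in for model parameters.
w32 = torch.randn(4, 4, dtype=torch.float32)

# Casting to half precision halves the memory per element (4 bytes -> 2),
# which is why dtype selection at load time matters for large models.
w16 = w32.to(torch.float16)

print(w32.element_size(), w16.element_size())  # → 4 2
```

The diff pins `dtype=torch.float32` (full precision); passing `torch.float16` instead would roughly halve the model's memory footprint at some cost in numeric range.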