Instructions to use Synthyra/ESM2-650M with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Synthyra/ESM2-650M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="Synthyra/ESM2-650M", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("Synthyra/ESM2-650M", trust_remote_code=True, dtype="auto")
```

A short usage sketch for the pipeline follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
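Once the pipeline above is constructed, masked positions in a protein sequence are filled in via ESM's `<mask>` token. A minimal sketch, assuming the model's custom code works with the standard fill-mask pipeline (which the snippet above implies); the sequence is an arbitrary illustration, not from the model card:

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="Synthyra/ESM2-650M", trust_remote_code=True)

# ESM tokenizers use "<mask>" as the mask token; the sequence below is an
# arbitrary example, not a biologically meaningful one.
for pred in pipe("MKTAYIAKQR<mask>ISFVKSHFSRQLEERLGLIEVQ"):
    print(pred["token_str"], round(pred["score"], 3))
```

Each prediction is a dict with the candidate token (`token_str`), its probability (`score`), and the completed `sequence`.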
Upload modeling_fastesm.py with huggingface_hub
- modeling_fastesm.py (+6 -1)
modeling_fastesm.py CHANGED:

```diff
@@ -25,7 +25,12 @@ from transformers.models.esm.modeling_esm import (
     EsmClassificationHead,
 )
 
-
+try:
+    # when used from AutoModel, these are in the same directory
+    from .embedding_mixin import EmbeddingMixin, Pooler
+except:
+    # when running from our repo, these are in the base directory
+    from embedding_mixin import EmbeddingMixin, Pooler
 
 
 def _create_pad_block_mask(attention_mask_2d: torch.Tensor):
```
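The try/except in this commit is the usual dual-import idiom for Hugging Face remote code: when the model is loaded through `AutoModel` with `trust_remote_code=True`, transformers copies `modeling_fastesm.py` and its sibling `embedding_mixin.py` into a generated dynamic-module package, so the package-relative import succeeds; when the file is run directly from a checkout of the repo, only the bare import resolves. A minimal sketch of the Hub-side load path (the model ID is from this page; the rest is standard transformers usage):

```python
# Loading through AutoModel exercises the first branch of the try/except,
# because transformers imports modeling_fastesm.py as part of a generated
# package rather than as a top-level script.
from transformers import AutoModel

model = AutoModel.from_pretrained("Synthyra/ESM2-650M", trust_remote_code=True)
print(type(model).__module__)  # a transformers_modules.* package, not __main__
```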