# **ESM-1b**

ESM-1b ([paper](https://www.pnas.org/content/118/15/e2016239118#:~:text=https%3A//doi.org/10.1073/pnas.2016239118), [repository](https://github.com/facebookresearch/esm)) is a transformer protein language model trained on protein sequence data without label supervision. The model is pretrained on Uniref50 with an unsupervised masked language modeling (MLM) objective, meaning it is trained to predict amino acids from the surrounding sequence context. This pretraining objective allows ESM-1b to learn generally useful features that can be transferred to downstream prediction tasks. ESM-1b has been evaluated on a variety of tasks related to protein structure and function, including remote homology detection, secondary structure prediction, contact prediction, and prediction of the effect of mutations on function, producing state-of-the-art results.


## **Model description**

The ESM-1b model is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and training procedure, using the Uniref50 2018_03 database of protein sequences. Note that the pretraining uses the raw protein sequences only; the training is purely unsupervised, and no labels related to structure or function are given during training.

Training uses the masked language modeling objective. The masking follows the procedure of [Devlin et al. 2019](https://arxiv.org/abs/1810.04805), randomly masking 15% of the amino acids in the input, and includes the pass-through and random-token noise. One architectural difference from the RoBERTa model is that ESM-1b uses [pre-activation layer normalization](https://arxiv.org/abs/1603.05027).

The learned representations can be used as features for downstream tasks. For example, if you have a dataset of measurements of protein activity, you can fit a regression model on the features output by ESM-1b to predict the activity of new sequences. The model can also be fine-tuned.
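
A minimal sketch of that workflow, reusing the `ESMTokenizer`/`ESMForMaskedLM` classes shown under *How to use* below; the toy `sequences`/`activities` data, the mean-pooling of the last hidden layer, and the choice of ridge regression are illustrative assumptions, not part of the original ESM-1b recipe:

```python
# Hedged sketch: mean-pooled ESM-1b embeddings as features for a simple
# regression model. Dataset, pooling and regressor are illustrative choices.
import torch
from sklearn.linear_model import Ridge
from transformers import ESMForMaskedLM, ESMTokenizer

tokenizer = ESMTokenizer.from_pretrained("facebook/esm-1b", do_lower_case=False)
model = ESMForMaskedLM.from_pretrained("facebook/esm-1b")
model.eval()

sequences = ["QERLKSIVRILE", "QERLKSIVRILD"]   # hypothetical proteins
activities = [0.8, 0.3]                        # hypothetical measurements

features = []
with torch.no_grad():
    for seq in sequences:
        inputs = tokenizer(seq, return_tensors="pt")
        outputs = model(**inputs, output_hidden_states=True)
        last_hidden = outputs.hidden_states[-1]           # (1, seq_len, hidden)
        features.append(last_hidden.mean(dim=1).squeeze(0).numpy())

regressor = Ridge().fit(features, activities)             # predict activity of new sequences
```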

ESM-1b can infer information about the structure and function of proteins without further supervision, i.e. it is capable of zero-shot transfer to structure and function prediction. [Rao et al. 2020](https://openreview.net/pdf?id=fylclEqgvgd) found that the attention heads of ESM-1b directly correspond to contacts in the 3D structure of the protein. [Meier et al. 2021](https://openreview.net/pdf?id=uXc42E9ZPFs) found that ESM-1b can be used to score the effect of sequence variations on protein function.
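
For illustration, here is a hedged sketch of one such scoring scheme, a masked-marginal style score in the spirit of Meier et al. 2021 rather than their exact protocol. The example sequence, the mutation `E13K`, and the assumption that residue *i* of the protein maps to index *i* of `input_ids` (after the prepended `<cls>` token) are illustrative:

```python
# Hedged sketch: score a point mutation by masking its position and comparing
# the model's log-probabilities for the mutant vs. the wild-type amino acid.
import torch
from transformers import ESMForMaskedLM, ESMTokenizer

tokenizer = ESMTokenizer.from_pretrained("facebook/esm-1b", do_lower_case=False)
model = ESMForMaskedLM.from_pretrained("facebook/esm-1b")
model.eval()

wild_type = "QERLKSIVRILEESLGYNIVAT"
position, wt_aa, mut_aa = 13, "E", "K"   # 1-based position in the protein

inputs = tokenizer(wild_type, return_tensors="pt")
input_ids = inputs["input_ids"].clone()
input_ids[0, position] = tokenizer.mask_token_id   # index `position`, because of the leading <cls>

with torch.no_grad():
    logits = model(input_ids=input_ids, attention_mask=inputs["attention_mask"]).logits
log_probs = torch.log_softmax(logits[0, position], dim=-1)

score = (log_probs[tokenizer.convert_tokens_to_ids(mut_aa)]
         - log_probs[tokenizer.convert_tokens_to_ids(wt_aa)]).item()
print(f"{wt_aa}{position}{mut_aa}: {score:.3f}")   # higher means the mutant is more favored
```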


## **Intended uses & limitations**

The model can be used for feature extraction, fine-tuned on downstream tasks, or used directly to make inferences about the structure and function of protein sequences.


### **How to use**

You can use this model with a pipeline for masked language modeling:

```python
>>> from transformers import ESMForMaskedLM, ESMTokenizer, pipeline
>>> tokenizer = ESMTokenizer.from_pretrained("facebook/esm-1b", do_lower_case=False)
>>> model = ESMForMaskedLM.from_pretrained("facebook/esm-1b")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker('QERLKSIVRILE<mask>SLGYNIVAT')

[{'sequence': 'Q E R L K S I V R I L E E S L G Y N I V A T',
  'score': 0.0933581069111824,
  'token': 9,
  'token_str': 'E'},
 {'sequence': 'Q E R L K S I V R I L E K S L G Y N I V A T',
  'score': 0.09198431670665741,
  'token': 15,
  'token_str': 'K'},
 {'sequence': 'Q E R L K S I V R I L E S S L G Y N I V A T',
  'score': 0.06775771081447601,
  'token': 8,
  'token_str': 'S'},
 {'sequence': 'Q E R L K S I V R I L E L S L G Y N I V A T',
  'score': 0.0661069005727768,
  'token': 4,
  'token_str': 'L'},
 {'sequence': 'Q E R L K S I V R I L E R S L G Y N I V A T',
  'score': 0.06330915540456772,
  'token': 10,
  'token_str': 'R'}]
```

Here is how to use this model to get the features of a given protein sequence in PyTorch:

```python
from transformers import ESMForMaskedLM, ESMTokenizer

tokenizer = ESMTokenizer.from_pretrained("facebook/esm-1b", do_lower_case=False)
model = ESMForMaskedLM.from_pretrained("facebook/esm-1b")

sequence_example = "QERLKSIVRILE"
encoded_input = tokenizer(sequence_example, return_tensors='pt')
output = model(**encoded_input)
```


## **Training data**

The ESM-1b model was pretrained on [Uniref50](https://www.uniprot.org/downloads) 2018_03, a dataset consisting of approximately 30 million protein sequences.


## **Training procedure**


### **Preprocessing**

The protein sequences are uppercased and tokenized with a single space between amino acids, using a vocabulary size of 21. The inputs of the model are then of the form:

```
<cls> Protein Sequence A
```

During training, sequences longer than 1023 tokens (not counting the CLS token) are randomly cropped to a length of 1023.

The details of the masking procedure for each sequence follow Devlin et al. 2019 (see the sketch after this list):

* 15% of the amino acids are masked.
* In 80% of the cases, the masked amino acids are replaced by `<mask>`.
* In 10% of the cases, the masked amino acids are replaced by a random amino acid, different from the one they replace.
* In the remaining 10% of the cases, the masked amino acids are left as is.
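
As a rough illustration only (not the original training code), this dynamic masking can be approximated with the generic Transformers data collator, assuming `ESMTokenizer` exposes the standard `<mask>`/`<pad>` special tokens:

```python
# Hedged sketch: BERT-style 15% masking with the 80/10/10 split, reproduced
# with DataCollatorForLanguageModeling rather than the original training code.
from transformers import DataCollatorForLanguageModeling, ESMTokenizer

tokenizer = ESMTokenizer.from_pretrained("facebook/esm-1b", do_lower_case=False)
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,   # 15% of positions; of those, 80% -> <mask>, 10% random, 10% unchanged
)

batch = collator([tokenizer("QERLKSIVRILEESLGYNIVAT")])
print(batch["input_ids"])   # corrupted inputs
print(batch["labels"])      # -100 everywhere except the selected positions
```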


### **Pretraining**

The model was trained on 128 NVIDIA V100 GPUs for 500K updates, using a sequence length of 1024 (131,072 tokens per batch). The optimizer is Adam (betas=[0.9, 0.999]) with a peak learning rate of 1e-4, a weight decay of 0, learning rate warmup over 16k steps, and inverse square root decay of the learning rate afterwards.
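
A minimal sketch of that schedule, assuming a fairseq-style linear warmup to the peak rate followed by decay proportional to the inverse square root of the update number:

```python
# Hedged sketch of the learning-rate schedule described above; the exact
# implementation (e.g. the learning rate at step 0) may differ in detail.
import math

PEAK_LR = 1e-4
WARMUP_STEPS = 16_000

def learning_rate(step: int) -> float:
    """Linear warmup to PEAK_LR over WARMUP_STEPS, then inverse square-root decay."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * math.sqrt(WARMUP_STEPS / step)

print(learning_rate(8_000))    # 5e-05 (mid-warmup)
print(learning_rate(16_000))   # 0.0001 (peak)
print(learning_rate(64_000))   # 5e-05 (after decay)
```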