---
language: protein
tags:
- protein language model
datasets:
- Uniref100
---

# ProtBert model

Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in
[this repository](https://github.com/agemagician/ProtTrans). This model was trained on uppercase amino acids: it only works with capital-letter amino acid sequences.

## Model description

ProtBert is based on the BERT model and was pretrained on a large corpus of protein sequences in a self-supervised fashion.
This means it was pretrained on raw protein sequences only, with no human labelling of any kind (which is why it can use lots of
publicly available data), using an automatic process to generate inputs and labels from those protein sequences.

One important difference between our BERT model and the original BERT version is that each sequence is treated as a complete document,
so the next sentence prediction objective is not used.
The masking follows the original BERT training and randomly masks 15% of the amino acids in the input.

In the end, the features extracted from this model revealed that the LM embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein
shape.
This implies that the model learned some of the grammar of the language of life as realized in protein sequences.

## Intended uses & limitations

The model can be used for protein feature extraction or fine-tuned on downstream tasks.
We have noticed that for some tasks you can gain more accuracy by fine-tuning the model rather than using it as a feature extractor.
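
As an illustration only (this is not from the original card), a sequence-level fine-tuning run could start from `BertForSequenceClassification`; the toy dataset, labels, and hyperparameters below are assumptions:

```python
# Hypothetical fine-tuning sketch: the dataset, labels, and hyperparameters
# are placeholders, not values from the original model card.
import torch
from transformers import (BertForSequenceClassification, BertTokenizer,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertForSequenceClassification.from_pretrained("Rostlab/prot_bert", num_labels=2)

# Sequences must be uppercase and space-separated, with rare amino acids mapped to X.
sequences = ["M K T A Y I A K Q R", "A E T C G A V"]  # toy examples
labels = [0, 1]                                       # e.g. membrane-bound vs. water-soluble

encodings = tokenizer(sequences, padding=True, truncation=True)

class ToyProteinDataset(torch.utils.data.Dataset):
    """Wraps tokenized sequences and labels for the Trainer."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="prot_bert_finetuned", num_train_epochs=1),
    train_dataset=ToyProteinDataset(encodings, labels),
)
trainer.train()
```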

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import BertForMaskedLM, BertTokenizer, pipeline
>>> tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
>>> model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker('D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T')

[{'score': 0.11088453233242035,
  'sequence': '[CLS] D L I P T S S K L V V L D T S L Q V K K A F F A L V T [SEP]',
  'token': 5,
  'token_str': 'L'},
 {'score': 0.08402521163225174,
  'sequence': '[CLS] D L I P T S S K L V V S D T S L Q V K K A F F A L V T [SEP]',
  'token': 10,
  'token_str': 'S'},
 {'score': 0.07328339666128159,
  'sequence': '[CLS] D L I P T S S K L V V V D T S L Q V K K A F F A L V T [SEP]',
  'token': 8,
  'token_str': 'V'},
 {'score': 0.06921856850385666,
  'sequence': '[CLS] D L I P T S S K L V V K D T S L Q V K K A F F A L V T [SEP]',
  'token': 12,
  'token_str': 'K'},
 {'score': 0.06382402777671814,
  'sequence': '[CLS] D L I P T S S K L V V I D T S L Q V K K A F F A L V T [SEP]',
  'token': 11,
  'token_str': 'I'}]
```

Here is how to use this model to get the features of a given protein sequence in PyTorch:

```python
from transformers import BertModel, BertTokenizer
import re

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert")

# Sequences are space-separated uppercase amino acids; rare amino acids (U, Z, O, B) are mapped to X.
sequence_Example = "A E T C Z A O"
sequence_Example = re.sub(r"[UZOB]", "X", sequence_Example)

encoded_input = tokenizer(sequence_Example, return_tensors='pt')
output = model(**encoded_input)
```
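
If you need one fixed-size embedding per protein, a common approach (not prescribed by the original card) is to mean-pool the token embeddings from the block above; the pooling strategy here is an assumption:

```python
# Continues from the feature-extraction block above.
# Mean-pool token embeddings into a single vector per protein, ignoring padding
# (note this simple sketch also averages over the [CLS] and [SEP] tokens).
token_embeddings = output.last_hidden_state                   # (batch, seq_len, hidden)
mask = encoded_input["attention_mask"].unsqueeze(-1).float()  # (batch, seq_len, 1)
protein_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(protein_embedding.shape)  # e.g. torch.Size([1, hidden_size])
```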

## Training data

The ProtBert model was pretrained on [Uniref100](https://www.uniprot.org/downloads), a dataset consisting of 217 million protein sequences.

## Training procedure

### Preprocessing

The protein sequences are uppercased and tokenized using a single space between amino acids, with a vocabulary size of 21. The rare amino acids "U, Z, O, B" were mapped to "X".
The inputs of the model are then of the form:

```
[CLS] Protein Sequence A [SEP] Protein Sequence B [SEP]
```

Furthermore, each protein sequence was treated as a separate document.
The preprocessing step was performed twice: once for a combined length (of the two sequences) of less than 512 amino acids, and once for a combined length (of the two sequences) of less than 2048 amino acids.
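
For concreteness (this helper is an assumption, not part of the original pipeline code), preparing a raw, unspaced sequence for the tokenizer could look like this:

```python
import re

def prepare_sequence(raw_sequence: str) -> str:
    """Uppercase, map rare amino acids (U, Z, O, B) to X, and insert single spaces."""
    seq = raw_sequence.upper()
    seq = re.sub(r"[UZOB]", "X", seq)
    return " ".join(seq)

print(prepare_sequence("metlzk"))  # "M E T L X K"
```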

The details of the masking procedure for each sequence followed the original BERT model, as shown in the sketch after this list:
- 15% of the amino acids are masked.
- In 80% of the cases, the masked amino acids are replaced by `[MASK]`.
- In 10% of the cases, the masked amino acids are replaced by a random amino acid (different from the one they replace).
- In the 10% remaining cases, the masked amino acids are left as is.
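
A minimal sketch of that 80/10/10 masking rule (written for illustration; the original training used the standard BERT data pipeline, not this exact code):

```python
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWYX")  # 21-token amino-acid vocabulary

def mask_tokens(tokens, mask_prob=0.15):
    """Apply BERT-style masking to a list of amino-acid tokens.

    Returns the corrupted tokens and a label list holding the original token
    at masked positions and None elsewhere.
    """
    corrupted, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:
            labels.append(token)
            roll = random.random()
            if roll < 0.8:        # 80%: replace with [MASK]
                corrupted.append("[MASK]")
            elif roll < 0.9:      # 10%: replace with a different random amino acid
                corrupted.append(random.choice([a for a in AMINO_ACIDS if a != token]))
            else:                 # 10%: keep the original amino acid
                corrupted.append(token)
        else:
            labels.append(None)
            corrupted.append(token)
    return corrupted, labels

print(mask_tokens("D L I P T S S K L V V".split()))
```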

### Pretraining

The model was trained on a single TPU Pod V3-512 for 400k steps in total:
300k steps with sequence length 512 (batch size 15k), and 100k steps with sequence length 2048 (batch size 2.5k).
The optimizer used is LAMB with a learning rate of 0.002, a weight decay of 0.01, learning rate warmup for 40k steps and linear decay of the learning rate afterwards.
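
As a small illustration of the stated schedule (the function below is a sketch, not the original training code, and it assumes the decay reaches zero at the final step):

```python
PEAK_LR = 0.002
WARMUP_STEPS = 40_000
TOTAL_STEPS = 400_000

def learning_rate(step: int) -> float:
    """Linear warmup to PEAK_LR over WARMUP_STEPS, then linear decay to zero at TOTAL_STEPS."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

print(learning_rate(20_000))   # 0.001  (halfway through warmup)
print(learning_rate(40_000))   # 0.002  (peak)
print(learning_rate(220_000))  # 0.001  (halfway through decay)
```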

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Test results:

| Task/Dataset | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane |
|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | 75 | 63 |  |  |
| TS115 | 83 | 72 |  |  |
| CB513 | 81 | 66 |  |  |
| DeepLoc |  |  | 79 | 91 |

### BibTeX entry and citation info

```bibtex
@article{Elnaggar2020.07.12.199554,
  author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and Bhowmik, Debsindhu and Rost, Burkhard},
  title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
  elocation-id = {2020.07.12.199554},
  year = {2020},
  doi = {10.1101/2020.07.12.199554},
  publisher = {Cold Spring Harbor Laboratory},
  abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability: ProtTrans is available at https://github.com/agemagician/ProtTrans},
  URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
  eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
  journal = {bioRxiv}
}
```

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)