Commit 77bed56 (1 parent: 4e5f31b)
andrecornman committed: Create README.md

Files changed (1): README.md added (+85, -0)

---
datasets:
- tattabio/OMG
license: apache-2.0
---

# gLM2_150M

gLM2 is a mixed-modality genomic language model, trained on the [`OMG Dataset`](https://huggingface.co/datasets/tattabio/OMG).
The model encodes a genomic scaffold with both amino-acid and DNA tokens.

gLM2 is trained at two scales: 150M and 650M parameters (the 650M model is available at [`tattabio/gLM2_650M`](https://huggingface.co/tattabio/gLM2_650M)).

See [https://github.com/TattaBio/gLM2](https://github.com/TattaBio/gLM2) for inference scripts.

### Model Description

gLM2 is a transformer encoder trained with the masked language modeling objective.
It encodes a genomic contig as a sequence of protein coding sequences (CDS) and DNA inter-genic sequences (IGS).
CDS elements are tokenized using per-amino-acid tokens, and IGS elements are tokenized using per-nucleotide tokens.

- To encode the genomic strand, each genomic element is prepended with a special token, either `<+>` or `<->`, indicating the positive or negative strand.
- To avoid collisions between amino acid and nucleotide tokens, the tokenizer expects all amino acids to be uppercase and all nucleotides to be lowercase (see the formatting sketch below).
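
As a concrete illustration of these formatting rules, here is a minimal sketch (not part of the official gLM2 tooling) that assembles an input string from hypothetical parsed elements; the `elements` list and the `to_glm2_sequence` helper are illustrative assumptions, not a model-card API.

```python
# Hypothetical parsed scaffold: (strand, kind, sequence) per element, in scaffold order.
elements = [
    ("+", "CDS", "MALTKVEKRNRIKRRVRGKISGTQASPRLSVYKSNK"),
    ("+", "IGS", "aatttaaggaa"),
    ("-", "CDS", "MLGIDNIERVKPGGLELVDRLVAVNRVTKVTKGGRAFGFSAIVVVGNED"),
]

def to_glm2_sequence(elements):
    parts = []
    for strand, kind, seq in elements:
        strand_token = "<+>" if strand == "+" else "<->"
        # Amino acids (CDS) must be uppercase, nucleotides (IGS) lowercase.
        seq = seq.upper() if kind == "CDS" else seq.lower()
        parts.append(strand_token + seq)
    return "".join(parts)

print(to_glm2_sequence(elements))
# <+>MALTKVEKRNRIKRRVRGKISGTQASPRLSVYKSNK<+>aatttaaggaa<->MLGIDNIERVKPGGLELVDRLVAVNRVTKVTKGGRAFGFSAIVVVGNED
```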

UPDATE (09/2024): We updated the model with a longer context length (4096 tokens vs. 2048 tokens) and per-nucleotide IGS tokenization instead of BPE.

## Getting Started

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('tattabio/gLM2_150M', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained('tattabio/gLM2_150M', trust_remote_code=True)

# A contig with two proteins and an inter-genic sequence.
# NOTE: Nucleotides should always be lowercase, and prepended with `<+>`.
sequence = "<+>MALTKVEKRNRIKRRVRGKISGTQASPRLSVYKSNK<+>aatttaaggaa<->MLGIDNIERVKPGGLELVDRLVAVNRVTKVTKGGRAFGFSAIVVVGNED"

# Tokenize the sequence.
encodings = tokenizer([sequence], return_tensors='pt')

# Extract embeddings.
with torch.no_grad():
    embeddings = model(encodings.input_ids.cuda(), output_hidden_states=True).last_hidden_state
```
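
If a single embedding per contig is needed, one common option (not prescribed by the model card) is to mean-pool the token embeddings; a minimal sketch continuing from the snippet above:

```python
# Sketch only: mean-pool token embeddings into one vector per sequence,
# ignoring padding positions via the attention mask.
mask = encodings.attention_mask.cuda().unsqueeze(-1)        # (batch, seq_len, 1)
pooled = (embeddings * mask).sum(dim=1) / mask.sum(dim=1)   # (batch, hidden_dim)
print(pooled.shape)
```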

### Training Data

gLM2 is trained on the [`OMG`](https://huggingface.co/datasets/tattabio/OMG) dataset.
To improve dataset balance and remove near-duplicate examples, the data is tokenized and pruned by applying semantic deduplication ([SemDedup](https://arxiv.org/abs/2303.09540)).
We use an embedding distance threshold of 2e-3, resulting in 49% of the dataset being pruned.
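
For intuition, here is a minimal sketch of the SemDedup idea (cluster embeddings, then drop near-duplicates within each cluster). The embedding dimensionality, clustering settings, and greedy pruning loop are illustrative assumptions, not the actual OMG preprocessing pipeline; only the 2e-3 distance threshold comes from the card.

```python
import numpy as np
from sklearn.cluster import KMeans

def semdedup(embeddings: np.ndarray, n_clusters: int = 10, epsilon: float = 2e-3):
    """Return indices of examples kept after pruning near-duplicates (sketch only)."""
    # L2-normalize so dot products are cosine similarities.
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(embeddings)
    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sims = embeddings[idx] @ embeddings[idx].T  # pairwise cosine similarity
        removed = set()
        for i in range(len(idx)):
            if i in removed:
                continue
            keep.append(idx[i])
            # Any later member within `epsilon` cosine distance is a near-duplicate.
            close = np.where(1.0 - sims[i] < epsilon)[0]
            removed.update(j for j in close if j > i)
    return np.array(sorted(keep))

# Toy usage with random 64-d "embeddings".
kept = semdedup(np.random.rand(1000, 64).astype(np.float32))
print(f"kept {len(kept)} of 1000 examples")
```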

## Training Details

- Pretraining tokens: 315B
- Context length: 4096
- Masking rate: 30% (illustrated in the sketch below)
- Learning rate: 1e-3
- Optimizer: AdamW (betas = (0.9, 0.95))
- Mixed precision training: bfloat16
- Weight decay: 0.1
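
For reference, a minimal sketch of what a 30% masking rate means for the masked language modeling objective; the `mask_tokens` helper, `mask_token_id`, and the `-100` ignore index are illustrative assumptions, not the actual gLM2 training code.

```python
import torch

def mask_tokens(input_ids: torch.Tensor, mask_token_id: int, mask_rate: float = 0.30):
    """Corrupt ~30% of positions and build MLM labels (sketch only)."""
    labels = input_ids.clone()
    masked = torch.rand(input_ids.shape) < mask_rate  # sample positions to mask
    labels[~masked] = -100                            # loss computed only on masked positions
    corrupted = input_ids.clone()
    corrupted[masked] = mask_token_id
    return corrupted, labels

ids = torch.randint(4, 40, (2, 1024))                 # toy token ids
corrupted, labels = mask_tokens(ids, mask_token_id=1)
print((labels != -100).float().mean())                # ~0.30
```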

## Citation

**BioRxiv:**
[https://www.biorxiv.org/content/10.1101/2024.08.14.607850](https://www.biorxiv.org/content/10.1101/2024.08.14.607850)

**BibTeX:**

```bibtex
@article{Cornman2024.08.14.607850,
  author = {Cornman, Andre and West-Roberts, Jacob and Camargo, Antonio Pedro and Roux, Simon and Beracochea, Martin and Mirdita, Milot and Ovchinnikov, Sergey and Hwang, Yunha},
  title = {The OMG dataset: An Open MetaGenomic corpus for mixed-modality genomic language modeling},
  elocation-id = {2024.08.14.607850},
  year = {2024},
  doi = {10.1101/2024.08.14.607850},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2024/08/17/2024.08.14.607850},
  eprint = {https://www.biorxiv.org/content/early/2024/08/17/2024.08.14.607850.full.pdf},
  journal = {bioRxiv}
}
```