shmelev committed
Commit da975ed
1 Parent(s): acb61d8

Upload file
README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ tags:
+ - dna
+ - human_genome
+ ---
+
+ # WARNING
+
+ This README should be updated to match the current model. Num steps: 800,000
+
+ # GENA-LM
+
+ GENA-LM is a transformer masked language model trained on human DNA sequences.
+
+ Differences between GENA-LM and DNABERT:
+ - BPE tokenization instead of k-mers (see the tokenization sketch below);
+ - input sequence size of about 3000 nucleotides (512 BPE tokens) vs. 510 nucleotides for DNABERT;
+ - pre-training on the T2T human genome assembly vs. GRCh38.p13.
+
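+ To illustrate the BPE point, a small tokenization sketch (our illustration; the example sequence is arbitrary and actual token boundaries depend on the trained vocabulary):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base')
+ # BPE tokens span a variable number of nucleotides, unlike fixed-size k-mers
+ tokens = tokenizer.tokenize('ATTCTGAGTCAAGCTGGTCTCTCTCTGGTAGGAAGACAGAAAT')
+ print(tokens)       # variable-length subsequences from the 32k BPE vocabulary
+ print(len(tokens))  # far fewer tokens than input nucleotides
+ ```
+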
+ Source code and data: https://github.com/AIRI-Institute/GENA_LM
+
+ ## Examples
+ ### How to load the model to fine-tune it on a classification task
+ ```python
+ # BertForSequenceClassification here is the custom class from the GENA_LM repository linked above
+ from src.gena_lm.modeling_bert import BertForSequenceClassification
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base')
+ model = BertForSequenceClassification.from_pretrained('AIRI-Institute/gena-lm-bert-base')
+ ```
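+
+ The loaded classifier can then be applied to raw DNA strings. A minimal inference sketch (ours; the example sequence and the assumption of a binary task are illustrative):
+
+ ```python
+ import torch
+
+ seq = 'ATTCTGAGTCAAGCTGGTCTCTCTCTGGTAGGAAGACAGAAAT'  # hypothetical fragment
+ inputs = tokenizer(seq, return_tensors='pt')
+
+ model.eval()
+ with torch.no_grad():
+     logits = model(**inputs).logits  # shape: (1, num_labels)
+ print(logits.softmax(dim=-1))
+ ```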
+
+ ## Model description
+ The GENA-LM model is trained in a masked language modeling (MLM) fashion, following the approach proposed in the BigBird paper, by masking 15% of tokens. The model config for `gena-lm-bert-base` is similar to bert-base:
+
+ - 512 maximum sequence length
+ - 12 layers, 12 attention heads
+ - 768 hidden size
+ - 32k vocabulary size
+
+ We pre-trained `gena-lm-bert-base` on the latest T2T human genome assembly (https://www.ncbi.nlm.nih.gov/assembly/GCA_009914755.3/). Pre-training was performed for 500,000 iterations with the same parameters as in BigBird, except that the sequence length was 512 tokens and we used pre-layer normalization in the Transformer.
+
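+ Because the checkpoint ships a custom `modeling_bert.py` and the `auto_map` entry in `config.json` points `AutoModel` at its `BertForMaskedLM`, the pre-trained MLM model can also be loaded through the Auto classes. A sketch, assuming a `transformers` version with remote-code support:
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base')
+ # trust_remote_code lets transformers import the repo's modeling_bert.py
+ model = AutoModel.from_pretrained('AIRI-Institute/gena-lm-bert-base', trust_remote_code=True)
+ ```
+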
+ ## Downstream tasks
+ Currently, the gena-lm-bert-base model has been fine-tuned and tested on the promoter prediction task. Its performance is comparable to previous SOTA results. We plan to fine-tune and release models for other downstream tasks in the near future.
+
+ ### Fine-tuning GENA-LM on our data and scoring
+ After fine-tuning gena-lm-bert-base on the promoter prediction dataset, the following results were achieved:
+
+ | model                    | seq_len (bp) | F1 (%) |
+ |--------------------------|--------------|--------|
+ | DeePromoter              | 300          | 95.60  |
+ | GENA-LM bert-base (ours) | 2000         | 95.72  |
+ | BigBird                  | 16000        | 99.90  |
+
+ We can conclude that our model achieves performance comparable to previously published results on the promoter prediction task.
+
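+ For completeness, a minimal fine-tuning sketch (ours; the CSV path, column names, and hyperparameters are placeholders, not the settings behind the table above):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import Trainer, TrainingArguments
+
+ # Hypothetical dataset: a CSV with 'sequence' and 'label' columns
+ ds = load_dataset('csv', data_files='promoters.csv')['train'].train_test_split(test_size=0.1)
+ ds = ds.map(lambda b: tokenizer(b['sequence'], truncation=True, max_length=512), batched=True)
+
+ trainer = Trainer(
+     model=model,  # BertForSequenceClassification loaded as shown above
+     args=TrainingArguments(output_dir='out', per_device_train_batch_size=8, num_train_epochs=3),
+     train_dataset=ds['train'],
+     eval_dataset=ds['test'],
+     tokenizer=tokenizer,  # enables dynamic padding via the default data collator
+ )
+ trainer.train()
+ ```
+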
config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "auto_map": {
+     "AutoModel": "modeling_bert.BertForMaskedLM"
+   },
+   "attention_probs_dropout_prob": 0.1,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-5,
+   "max_position_embeddings": 4096,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 3,
+   "pre_layer_norm": true,
+   "last_layer_norm": true,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.6.0.dev0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 32000,
+   "sparse_config_cls": "deepspeed.ops.sparse_attention:BigBirdSparsityConfig",
+   "sparse_attention": {
+     "num_heads": 12,
+     "block": 64,
+     "different_layout_per_head": true,
+     "num_sliding_window_blocks": 3,
+     "num_global_blocks": 2,
+     "num_random_blocks": 3
+   }
+ }
+
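The non-standard keys above (`pre_layer_norm`, `last_layer_norm`, `sparse_config_cls`, `sparse_attention`) are presumably consumed by the custom `modeling_bert.py` below rather than by stock `transformers`, but standard loading still keeps them as attributes on the config object. A small inspection sketch (ours, assuming the hub checkpoint name used in the README):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('AIRI-Institute/gena-lm-bert-base')
print(config.hidden_size, config.num_hidden_layers, config.max_position_embeddings)
# Non-standard keys are preserved as plain attributes
print(config.pre_layer_norm, config.sparse_attention)
```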
modeling_bert.py ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81baff672ab9a6e139140330c8e1df30d3bba00885407fd6e6741d692dda04f9
+ size 556893689
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"tokenizer_class": "PreTrainedTokenizerFast"}