|
---
tags:
- dna
- human_genome
---
|
|
|
|
|
|
# GENA-LM |
|
|
|
GENA-LM is a transformer-based masked language model trained on human DNA sequences.
|
|
|
Differences between GENA-LM and DNABERT:

- BPE tokenization instead of k-mers (see the tokenization sketch below);
- input sequence size of about 3000 nucleotides (512 BPE tokens), compared to 510 nucleotides for DNABERT;
- pre-training on the T2T human genome assembly instead of GRCh38.p13.
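
Because BPE merges frequent nucleotide subsequences into single tokens, 512 tokens can cover roughly 3000 nucleotides. A minimal sketch illustrating this (the DNA fragment below is arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base')

# An arbitrary DNA fragment; BPE yields variable-length tokens, not fixed k-mers
seq = 'ATGGTGAGCAAGGGCGAGGAGCTGTTCACCGGGGTGGTGCCCATCCTGGTCGAG'
tokens = tokenizer.tokenize(seq)
print(tokens)
print(len(seq), 'nucleotides ->', len(tokens), 'tokens')
```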
|
|
|
Source code and data: https://github.com/AIRI-Institute/GENA_LM |
|
|
|
## Examples |
|
### How to load the model to fine-tune it on a classification task
|
```python
# BertForSequenceClassification is provided by the GENA-LM repository
# (https://github.com/AIRI-Institute/GENA_LM); clone it and run this
# from the repository root.
from src.gena_lm.modeling_bert import BertForSequenceClassification
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base')
model = BertForSequenceClassification.from_pretrained('AIRI-Institute/gena-lm-bert-base')
```
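
A quick sanity check of the loaded model (a sketch; the classification head is freshly initialized, so the logits are meaningless until the model is fine-tuned):

```python
import torch

# Tokenize an arbitrary DNA fragment and run a forward pass
inputs = tokenizer('ATGGTGAGCAAGGGCGAGGAGCTGTTCACC', return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, num_labels); num_labels defaults to 2
```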
|
|
|
## Model description |
|
GENA-LM is trained in a masked language model (MLM) fashion, following the method proposed in the BigBird paper, with 15% of tokens masked. The model config for `gena-lm-bert-base` is similar to `bert-base`:
|
|
|
- maximum sequence length: 512 tokens
- layers: 12
- attention heads: 12
- hidden size: 768
- vocabulary size: 32k
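
These values can be read directly from the published config (a minimal check using the standard `transformers` API; the expected numbers in the comments follow the list above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('AIRI-Institute/gena-lm-bert-base')
print(config.max_position_embeddings)                        # 512
print(config.num_hidden_layers, config.num_attention_heads)  # 12 12
print(config.hidden_size)                                    # 768
print(config.vocab_size)                                     # ~32k
```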
|
|
|
We pre-trained `gena-lm-bert-base` on the latest T2T human genome assembly (https://www.ncbi.nlm.nih.gov/assembly/GCA_009914755.3/). Pre-training ran for 500,000 iterations with the same hyperparameters as in BigBird, except that the sequence length was 512 tokens and we used pre-layer normalization in the Transformer.
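
To see the MLM objective in action, here is a minimal sketch that masks a single token and asks the model to reconstruct it. It assumes the repository's `modeling_bert` module also exports `BertForMaskedLM`, mirroring the upstream Hugging Face module; the DNA fragment is arbitrary:

```python
import torch
from transformers import AutoTokenizer
from src.gena_lm.modeling_bert import BertForMaskedLM  # assumed export, mirroring HF's modeling_bert

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base')
model = BertForMaskedLM.from_pretrained('AIRI-Institute/gena-lm-bert-base')
model.eval()

inputs = tokenizer('ATGGTGAGCAAGGGCGAGGAGCTGTTCACCGGGGTGGTGCCC', return_tensors='pt')

# Mask one token in the middle of the sequence
masked_ids = inputs['input_ids'].clone()
pos = masked_ids.shape[1] // 2
original = tokenizer.convert_ids_to_tokens([masked_ids[0, pos].item()])[0]
masked_ids[0, pos] = tokenizer.mask_token_id

with torch.no_grad():
    logits = model(input_ids=masked_ids, attention_mask=inputs['attention_mask']).logits

predicted = tokenizer.convert_ids_to_tokens([logits[0, pos].argmax().item()])[0]
print(original, '->', predicted)
```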
|
|
|
## Downstream tasks |
|
Currently, the gena-lm-bert-base model has been fine-tuned and tested on the promoter prediction task, where its performance is comparable to previously published SOTA results. We plan to fine-tune and release models for other downstream tasks in the near future.
|
|
|
### Fine-tuning GENA-LM on our data and scoring |
|
After fine-tuning gena-lm-bert-base on a promoter prediction dataset, the following results were achieved:
|
|
|
| model                    | seq_len (bp) | F1 (%) |
|--------------------------|--------------|--------|
| DeePromoter              | 300          | 95.60  |
| GENA-LM bert-base (ours) | 2000         | 95.72  |
| BigBird                  | 16000        | 99.90  |
|
|
|
We conclude that our model achieves performance comparable to previously published results on the promoter prediction task.
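
For reference, here is a minimal fine-tuning sketch using the Hugging Face `Trainer`. The toy `sequences`/`labels` data and all hyperparameters are illustrative placeholders, not the settings behind the table above; substitute a real promoter dataset:

```python
import torch
from transformers import AutoTokenizer, Trainer, TrainingArguments
from src.gena_lm.modeling_bert import BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base')
model = BertForSequenceClassification.from_pretrained(
    'AIRI-Institute/gena-lm-bert-base', num_labels=2)

# Toy placeholder data: two sequences, binary promoter labels
sequences = ['ATGGTGAGCAAGGGC' * 20, 'TTATTGACGTCGGCA' * 20]
labels = [1, 0]

class PromoterDataset(torch.utils.data.Dataset):
    """Tokenizes DNA strings and pairs them with labels for the Trainer."""
    def __init__(self, seqs, labels):
        self.enc = tokenizer(seqs, truncation=True, max_length=512, padding='max_length')
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item['labels'] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='gena_promoters', num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-5),
    train_dataset=PromoterDataset(sequences, labels),
)
trainer.train()
```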
|
|
|
|