yurakuratov committed
Commit 5d6fcf5 · verified · 1 Parent(s): aaedaf4

readme: update bib entry and links - GENA in NAR

Files changed (1): README.md (+14 -11)

README.md CHANGED
@@ -17,7 +17,7 @@ Differences between GENA-LM (`gena-lm-bert-base-t2t`) and DNABERT:
 
  Source code and data: https://github.com/AIRI-Institute/GENA_LM
 
- Paper: https://www.biorxiv.org/content/10.1101/2023.06.12.544594
+ Paper: https://academic.oup.com/nar/article/53/2/gkae1310/7954523
 
  This repository also contains models that are finetuned on downstream tasks:
  - promoters predictions (branch [promoters_300_run_1](https://huggingface.co/AIRI-Institute/gena-lm-bert-base-t2t/tree/promoters_300_run_1))
@@ -83,20 +83,23 @@ GENA-LM (`gena-lm-bert-base-t2t`) model is trained in a masked language model (MLM)
  We pre-trained `gena-lm-bert-base-t2t` using the latest T2T human genome assembly (https://www.ncbi.nlm.nih.gov/assembly/GCA_009914755.3/). The data was augmented by sampling mutations from 1000-genome SNPs (gnomAD dataset). Pre-training was performed for 2,100,000 iterations with batch size 256 and sequence length 512 tokens. We modified the Transformer with [Pre-Layer normalization](https://arxiv.org/abs/2002.04745), but without the final LayerNorm.
 
  ## Evaluation
- For evaluation results, see our paper: https://www.biorxiv.org/content/10.1101/2023.06.12.544594v1
+ For evaluation results, see our paper: https://academic.oup.com/nar/article/53/2/gkae1310/7954523
 
 
  ## Citation
  ```bibtex
  @article{GENA_LM,
- author = {Veniamin Fishman and Yuri Kuratov and Maxim Petrov and Aleksei Shmelev and Denis Shepelin and Nikolay Chekanov and Olga Kardymon and Mikhail Burtsev},
- title = {GENA-LM: A Family of Open-Source Foundational Models for Long DNA Sequences},
- elocation-id = {2023.06.12.544594},
- year = {2023},
- doi = {10.1101/2023.06.12.544594},
- publisher = {Cold Spring Harbor Laboratory},
- URL = {https://www.biorxiv.org/content/early/2023/06/13/2023.06.12.544594},
- eprint = {https://www.biorxiv.org/content/early/2023/06/13/2023.06.12.544594.full.pdf},
- journal = {bioRxiv}
+ author = {Fishman, Veniamin and Kuratov, Yuri and Shmelev, Aleksei and Petrov, Maxim and Penzar, Dmitry and Shepelin, Denis and Chekanov, Nikolay and Kardymon, Olga and Burtsev, Mikhail},
+ title = {GENA-LM: a family of open-source foundational DNA language models for long sequences},
+ journal = {Nucleic Acids Research},
+ volume = {53},
+ number = {2},
+ pages = {gkae1310},
+ year = {2025},
+ month = {01},
+ issn = {0305-1048},
+ doi = {10.1093/nar/gkae1310},
+ url = {https://doi.org/10.1093/nar/gkae1310},
+ eprint = {https://academic.oup.com/nar/article-pdf/53/2/gkae1310/61443229/gkae1310.pdf},
  }
  ```
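
The README hunks above reference finetuned checkpoints that live on git branches of this model repo (e.g. `promoters_300_run_1`). As an illustration only (not part of the commit), here is a minimal sketch of loading the base model and a branch checkpoint with the Hugging Face `transformers` API; `trust_remote_code=True` is an assumption based on GENA-LM shipping a custom BERT implementation, and the right `AutoModel*` head class depends on what the branch actually stores.

```python
# Hypothetical usage sketch, not from the commit: load GENA-LM from the Hub.
from transformers import AutoTokenizer, AutoModel

repo = "AIRI-Institute/gena-lm-bert-base-t2t"

# Base pre-trained model from the default branch.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)  # custom model code (assumption)

# Finetuned checkpoints are stored on branches of the same repo; `revision`
# selects the branch named in the README's downstream-task list.
promoter_model = AutoModel.from_pretrained(
    repo,
    revision="promoters_300_run_1",
    trust_remote_code=True,
)
```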
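The pre-training paragraph in the second hunk describes a Transformer with [Pre-Layer normalization](https://arxiv.org/abs/2002.04745) but without the final LayerNorm. Below is a schematic PyTorch sketch of that block ordering as I read the cited paper (my illustration, not the GENA-LM source code):

```python
# Schematic Pre-LN Transformer block: LayerNorm is applied *before* each
# sublayer, and the residual path stays un-normalized. Illustration only.
import torch
from torch import nn

class PreLNBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.ln1(x)                                # normalize first ...
        h, _ = self.attn(y, y, y, need_weights=False)  # ... then attention
        x = x + h                                      # residual add
        x = x + self.ff(self.ln2(x))                   # same pattern for the FFN
        return x

class PreLNEncoder(nn.Module):
    def __init__(self, n_layers: int, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.blocks = nn.ModuleList(
            PreLNBlock(d_model, n_heads, d_ff) for _ in range(n_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x  # no final LayerNorm, matching the README's description
```

Pre-LN is commonly used because it stabilizes training of deep Transformers; the README's variant additionally drops the final LayerNorm that the cited paper places after the last block.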