---
tags:
- biology
- bioinformatics
- tokenizers
---
# Effect of Tokenization on Transformers for Biological Sequences
## Abstract:
Deep learning models are transforming biological research. Many bioinformatics and comparative genomics algorithms analyze genomic data, either DNA or protein sequences. Examples include sequence alignment, phylogenetic tree inference, and automatic classification of protein functions. Among these deep learning algorithms, models for processing natural languages, developed in the natural language processing (NLP) community, were recently applied to biological sequences. However, biological sequences are different from natural languages, such as English and French, in which segmentation of the text into separate words is relatively straightforward. Moreover, biological sequences are characterized by extremely long sentences, which hamper their processing by current machine-learning models, notably the transformer architecture. In NLP, one of the first processing steps is to transform the raw text into a list of tokens. Deep-learning applications to biological sequence data mostly segment proteins and DNA into single characters. In this work, we study the effect of alternative tokenization algorithms on eight different tasks in biology, from predicting the function of proteins and their stability, through nucleotide sequence alignment, to classifying proteins into specific families. We demonstrate that applying alternative tokenization algorithms can increase accuracy and, at the same time, substantially reduce the input length compared to the trivial tokenizer in which each character is a token. Furthermore, applying these tokenization algorithms allows interpreting trained models while taking into account dependencies among positions. Finally, we trained these tokenizers on a large dataset of protein sequences containing more than 400 billion amino acids, which resulted in over a three-fold decrease in the number of tokens. We then tested these tokenizers, trained on large-scale data, on the above tasks and showed that for some tasks it is highly beneficial to train database-specific tokenizers. Our study suggests that tokenizers are likely to be a critical component in future deep-network analysis of biological sequence data.

![image](https://github.com/idotan286/BiologicalTokenizers/assets/58917533/d69893e2-7114-41a8-8d46-9b025b2d2840)

Different tokenization algorithms can be applied to biological sequences, as exemplified for the sequence “AAGTCAAGGATC”. (a) The baseline “words” tokenizer assumes a dictionary consisting of the nucleotides “A”, “C”, “G” and “T”; the length of the encoded sequence is 12, i.e., the number of nucleotides. (b) The “pairs” tokenizer assumes a dictionary consisting of all possible nucleotide pairs; the length of the encoded sequence is typically halved. (c) A sophisticated dictionary consisting of only three tokens, “AAG”, “TC” and “GA”; the encoded sequence for this dictionary contains only five tokens.
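
The following toy Python sketch (an illustration only, not the paper's code, which learns its vocabularies with BPE, WordPiece or Unigram) segments the example sequence with each of the three dictionaries using greedy longest-match lookup and reproduces the token counts from the figure:

```python
# Toy greedy longest-match segmentation of the example sequence "AAGTCAAGGATC"
# with the three dictionaries from the figure. Illustrative only.

def tokenize(sequence, vocab):
    """Greedily match the longest token from `vocab` at each position."""
    max_len = max(len(token) for token in vocab)
    tokens, i = [], 0
    while i < len(sequence):
        for length in range(min(max_len, len(sequence) - i), 0, -1):
            piece = sequence[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            raise ValueError(f"no token in the vocabulary matches position {i}")
    return tokens

seq = "AAGTCAAGGATC"
dictionaries = {
    "words":   {"A", "C", "G", "T"},                     # (a) 12 tokens
    "pairs":   {a + b for a in "ACGT" for b in "ACGT"},  # (b) 6 tokens
    "learned": {"AAG", "TC", "GA"},                      # (c) 5 tokens
}
for name, vocab in dictionaries.items():
    tokens = tokenize(seq, vocab)
    print(f"{name:8s} {len(tokens):2d} tokens: {tokens}")
```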
## Data:
The "data" folder contains the training, validation and test data for seven of the eight datasets used in the paper.

## BFD Tokenizers:

We trained BPE, WordPiece and Unigram tokenizers on samples of proteins from the 2.2 billion protein sequences of the BFD dataset (Steinegger and Söding 2018). We evaluated the average sequence length as a function of the vocabulary size and the number of sequences in the training data.

![BFD_BPE_table](https://github.com/idotan286/BiologicalTokenizers/assets/58917533/710b7aa7-0dde-46bb-9ddf-39a84b579d71)
![BFD_WPC_table](https://github.com/idotan286/BiologicalTokenizers/assets/58917533/8adfe5a7-25f5-4723-a87a-8598c6a76ff6)
![BFD_UNI_table](https://github.com/idotan286/BiologicalTokenizers/assets/58917533/4462e782-0b21-4377-a5fe-309685141538)

Effect of vocabulary size and number of training samples on the three tokenizers: BPE, WordPiece and Unigram. The darker the color, the higher the average number of tokens per protein. Increasing the vocabulary size and the training-set size reduces the number of tokens per protein for all of the tested tokenizers.
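
For reference, a BPE tokenizer of this kind can be trained with the Hugging Face `tokenizers` library roughly as follows; this is a minimal sketch, and the file name and hyperparameters are placeholders rather than the exact settings used for the BFD tokenizers.

```python
# Hedged sketch: training a BPE tokenizer on protein sequences with the
# Hugging Face `tokenizers` library. The path and hyperparameters are
# illustrative placeholders, not the settings used in the paper.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer

def read_sequences(path):
    """Yield one protein sequence per line (assumed plain-text format)."""
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line:
                yield line

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
trainer = BpeTrainer(vocab_size=3000, special_tokens=["[UNK]", "[PAD]"])

# Proteins contain no whitespace, so no pre-tokenizer is set; BPE merges
# are learned directly over the amino-acid characters of each sequence.
tokenizer.train_from_iterator(read_sequences("bfd_sample.txt"), trainer)
tokenizer.save("tokenizer.json")
```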
We uploaded the "BFD_Tokenizers", which were trained on 10,000,000 sequences randomly sampled from the BFD dataset.
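
If the uploaded tokenizer.json follows the standard Hugging Face `tokenizers` format (as its name suggests), it can be loaded and applied directly; the protein sequence in the snippet below is an arbitrary example, not taken from the datasets.

```python
# Minimal usage sketch: load a downloaded tokenizer.json and encode a protein.
# The example sequence is arbitrary and not taken from the paper's datasets.
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("tokenizer.json")

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
encoding = tokenizer.encode(seq)
print(encoding.tokens)  # learned subword tokens
print(f"{len(seq)} amino acids -> {len(encoding.tokens)} tokens")
```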
## GitHub

The code, datasets and trained tokenizers are available at https://github.com/idotan286/BiologicalTokenizers/.

## APA

```
Dotan, E., Jaschek, G., Pupko, T., & Belinkov, Y. (2023). Effect of Tokenization on Transformers for Biological Sequences. bioRxiv. https://doi.org/10.1101/2023.08.15.553415
```

## BibTeX

```
@article{Dotan_Effect_of_Tokenization_2023,
  author = {Dotan, Edo and Jaschek, Gal and Pupko, Tal and Belinkov, Yonatan},
  doi = {10.1101/2023.08.15.553415},
  journal = {bioRxiv},
  month = aug,
  title = {{Effect of Tokenization on Transformers for Biological Sequences}},
  year = {2023}
}
```