---
license: cc-by-sa-4.0
tags:
- DNA
- biology
- genomics
- protein
- kmer
- cancer
- gleason-grade-group
---
## Project Description 
This repository contains the trained model for our paper, **Fine-tuning a Sentence Transformer for DNA & Protein tasks**, currently under review at BMC Bioinformatics. The model, called **simcse-dna**, is based on the original implementation of **SimCSE [1]**. It was adapted for DNA downstream tasks by training on a small sample of k-mer tokens generated from the human reference genome, and it can be used to generate sentence embeddings for DNA tasks; a sketch of the k-mer tokenization idea follows.
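
For context, k-mer tokenization slides a window of length k across a sequence and treats each window as a token. A minimal sketch of that step (the `kmer_tokens` helper and the overlapping sliding-window scheme are illustrative assumptions, not the paper's published preprocessing code):

```python
# Hypothetical 6-mer tokenizer: slide a window of length k across the
# sequence and join the resulting tokens with spaces. The paper's actual
# preprocessing may differ.
def kmer_tokens(sequence: str, k: int = 6) -> str:
    return " ".join(sequence[i:i + k] for i in range(len(sequence) - k + 1))

print(kmer_tokens("ATGCGTAACT"))
# -> ATGCGT TGCGTA GCGTAA CGTAAC GTAACT
```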

### Prerequisites 
-----------
Please see the original [SimCSE](https://github.com/princeton-nlp/SimCSE) repository for installation details. The model will also be hosted on Zenodo (DOI: 10.5281/zenodo.11046580).

### Usage 

Run the following code to get the sentence embeddings:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the trained model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("dsfsi/simcse-dna")
model = AutoModel.from_pretrained("dsfsi/simcse-dna")

# sentences is your list of n DNA sequences represented as 6-mer tokens
sentences = ["ATGCGT TGCGTA GCGTAA", "CGTAAC GTAACT"]  # illustrative example

# Tokenize the input
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

# Get the sentence embeddings from the pooler output
with torch.no_grad():
    embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output
```
The retrieved embeddings can then be used as input features for a machine learning classifier, as sketched below.
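
A minimal example of that downstream step (assuming scikit-learn; the Random Forest choice and the placeholder labels are illustrative, not the paper's exact pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Feature matrix from the embeddings computed above, shape (n, hidden_size)
X = embeddings.numpy()
y = np.array([0, 1])  # placeholder labels, one per input sentence

# Fit a classifier on the embeddings and predict
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X))
```

Random Forest is shown because it was the strongest classifier in the evaluations below, but any scikit-learn-compatible classifier can be substituted.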

## Performance on evaluation tasks

More details about the datasets and how to access them are available in the paper **(TBA)**.

### Task 1: Detection of colorectal cancer cases (after oversampling)

| Model | 5-fold cross-validation accuracy (%) | Test accuracy (%) |
| --- | --- | ---|
| LightGBM | 91 | 63 |
| Random Forest | **94** | **71** |
| XGBoost | 93 | 66 |
| CNN | 42 | 52 |

| Model | 5-fold cross-validation F1 (%) | Test F1 (%) |
| --- | --- | ---|
| LightGBM |  91 | 66 |
| Random Forest |  **94** | **72** |
| XGBoost | 93 | 66 |
| CNN |  41 | 60 |
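
The 5-fold scores in these tables can be obtained with an evaluation loop along these lines (a sketch assuming scikit-learn and weighted F1; the exact metric configuration and the oversampling step are assumptions here, not the paper's published code):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# X: full embedding feature matrix, y: task labels (e.g. cancer vs. control)
scores = cross_validate(RandomForestClassifier(random_state=0), X, y,
                        cv=5, scoring=("accuracy", "f1_weighted"))
print(scores["test_accuracy"].mean(), scores["test_f1_weighted"].mean())
```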

### Task 2: Prediction of the Gleason grade group (after oversampling)

| Model | 5-fold cross-validation accuracy (%) | Test accuracy (%) |
| --- | --- | ---|
| LightGBM | 97 | 68 |
| Random Forest | **98** | **78** |
| XGBoost |97 | 70 |
| CNN |  35 |  50 |

| Model | 5-fold cross-validation F1 (%) | Test F1 (%) |
| --- | --- | ---|
| LightGBM |  97 |  70 |
| Random Forest | **98** | **80** |
| XGBoost |97 | 70 |
| CNN |  33 | 59 |

### Task 3: Detection of human TATA sequences (after oversampling)

| Model | 5-fold cross-validation accuracy (%) | Test accuracy (%) |
| --- | --- | ---|
| LightGBM | 98  | 93  |
| Random Forest | **99** | **96** |
| XGBoost |**99** | 95 |
| CNN | 38  | 59 |

| Model | 5-fold cross-validation F1 (%) | Test F1 (%) |
| --- | --- | ---|
| LightGBM | 98 | 92 |
| Random Forest | **99** | **95** |
| XGBoost | **99** | 92 |
| CNN |  58 | 10 |


## Authors 
-----------

* Mpho Mokoatle, Vukosi Marivate, Darlington Mapiye, Riana Bornman, Vanessa M. Hayes
* Contact details: u19394277@tuks.co.za

## Citation 
-----------
BibTeX reference **TBA**

### References

<a id="1">[1]</a> 
Gao, Tianyu, Xingcheng Yao, and Danqi Chen. "SimCSE: Simple Contrastive Learning of Sentence Embeddings." arXiv preprint arXiv:2104.08821 (2021).