---
tags:
- proteins
- molecules
- chemistry
- SMILES
- complex structures
---
## How to use the data sets
This dataset contains about 36,000 unique pairs of protein sequences and ligand SMILES, together with the coordinates
of their complexes from the PDB.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
Every (x,y,z) ligand coordinate maps onto a SMILES token, and is *nan* if the token does not represent an atom.
Every receptor coordinate maps onto the C-alpha coordinate of that residue.
The dataset can be used to fine-tune a language model.
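For illustration, the sketch below shows one way the token/coordinate alignment could be consumed. The regex is the commonly cited tokenization pattern from P. Schwaller et al., reproduced here for convenience, and `atom_coordinates` is a hypothetical helper, not part of the dataset itself.

```
import math
import re

# SMILES tokenization pattern from P. Schwaller et al., copied here for
# illustration; the dataset assumes tokens produced by this regex.
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def atom_coordinates(smiles, ligand_xyz):
    """Pair each SMILES token with its (x, y, z) coordinate and drop the
    *nan* entries that belong to non-atom tokens (bonds, branches, digits)."""
    tokens = SMILES_REGEX.findall(smiles)
    assert len(tokens) == len(ligand_xyz), "expected one coordinate per token"
    return [
        (token, xyz)
        for token, xyz in zip(tokens, ligand_xyz)
        if not any(math.isnan(c) for c in xyz)
    ]
```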
## Ligand selection criteria
Only ligands are considered that
- have at least 3 atoms,
- have a molecular weight >= 100 Da,
- and are not among the 280 most common ligands in the PDB (which includes common additives like PEG, ADP, ...).
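As a rough sketch, these criteria could be expressed with RDKit as shown below; `COMMON_LIGANDS` is only a placeholder for the actual exclusion list, and `keep_ligand` is a hypothetical helper, not the code used to build the dataset.

```
from rdkit import Chem
from rdkit.Chem import Descriptors

# Placeholder for the ~280 most common PDB ligand codes that are excluded
# (PEG, ADP, ...); the real list belongs to the preprocessing scripts.
COMMON_LIGANDS = {"PEG", "ADP"}

def keep_ligand(ligand_id, smiles):
    """Return True if a ligand passes the selection criteria above."""
    if ligand_id in COMMON_LIGANDS:
        return False
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    # GetNumAtoms() counts heavy atoms (implicit hydrogens are not included)
    return mol.GetNumAtoms() >= 3 and Descriptors.MolWt(mol) >= 100.0
```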
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/pdb_protein_ligand_complexes",split='train[:90%]')
validation = load_dataset("jglaser/pdb_protein_ligand_complexes",split='train[90%:]')
```
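A quick sanity check after loading, without assuming specific column names:

```
print(len(train), len(validation))  # number of complexes per split
print(train.column_names)           # available fields
print(train[0])                     # inspect a single protein/ligand record
```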
### Manual update from PDB
```
# download the PDB archive into folder pdb/
sh rsync.sh 24 # number of parallel download processes
# extract sequences and coordinates in parallel
sbatch pdb.slurm
# or
mpirun -n 42 parse_complexes.py # desired number of tasks
```