## How to use the data sets
### Use the already preprocessed data
Load the dataset using

```python
from datasets import load_dataset
dataset = load_dataset("jglaser/binding_affinity")
```
The file `data/all.parquet` contains the preprocessed data. To retrieve it,
you need to download and install [git LFS support](https://git-lfs.github.com/).
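
If you prefer to work with the raw file, here is a minimal sketch of reading
the Parquet file directly with pandas (assuming the repository has been cloned
and `git lfs pull` has fetched the file; requires `pyarrow` or `fastparquet`):

```python
# Minimal sketch: read the preprocessed Parquet file directly.
import pandas as pd

df = pd.read_parquet("data/all.parquet")

# Inspect the schema instead of assuming column names.
print(df.shape)
print(df.columns.tolist())
```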
### Pre-process yourself
To perform the preprocessing manually, download the data sets from the following sources:
- BindingDB

  In `bindingdb`, download the database as tab-separated values
  (<https://bindingdb.org> > Download > BindingDB_All_2021m4.tsv.zip)
  and extract the zip archive into `bindingdb/data`.

  Run the steps in `bindingdb.ipynb`.
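
  The notebook is not reproduced here; as a rough, hypothetical sketch of
  streaming the large BindingDB TSV in chunks with pandas (the extracted
  file name is an assumption, check what the zip actually contains):

  ```python
  # Hypothetical sketch: stream the large BindingDB TSV in chunks.
  # Assumes the archive was extracted to bindingdb/data/BindingDB_All.tsv.
  import pandas as pd

  chunks = pd.read_csv(
      "bindingdb/data/BindingDB_All.tsv",
      sep="\t",
      on_bad_lines="skip",  # the raw dump contains some ragged rows
      chunksize=100_000,
  )

  n_rows = 0
  for chunk in chunks:
      n_rows += len(chunk)
  print("rows:", n_rows)
  ```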
- PDBBind-cn

  Register for an account at <https://www.pdbbind.org.cn/>, confirm the
  validation email, then log in and download

  - the index files (1)
  - the general protein-ligand complexes (2)
  - the refined protein-ligand complexes (3)

  Extract those files into `pdbbind/data`.

  Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
  (e.g., `mpirun -n 64 pdbbind.py`).

  Perform the steps in the notebook `pdbbind.ipynb`.
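
  The script itself is not reproduced here; the following is a rough sketch of
  how such an MPI job typically divides work across ranks with `mpi4py` (the
  file pattern and the `process_complex` placeholder are hypothetical):

  ```python
  # Hypothetical MPI work-sharing sketch, not the actual pdbbind.py.
  # Launch with e.g. `mpirun -n 64 python sketch.py` (requires mpi4py).
  import glob
  from mpi4py import MPI

  def process_complex(path):
      # Placeholder for the real parsing/featurization of one complex.
      return path

  comm = MPI.COMM_WORLD
  rank, size = comm.Get_rank(), comm.Get_size()

  # Every rank sees the full list; each handles a disjoint slice.
  files = sorted(glob.glob("pdbbind/data/**/*.pdb", recursive=True))
  results = [process_complex(p) for p in files[rank::size]]

  # Gather per-rank results on rank 0 for writing out.
  gathered = comm.gather(results, root=0)
  if rank == 0:
      flat = [r for chunk in gathered for r in chunk]
      print(f"processed {len(flat)} complexes")
  ```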
- BindingMOAD

  Go to <https://bindingmoad.org> and download the files `every.csv`
  (All of Binding MOAD, Binding Data) and the non-redundant biounits
  (`nr_bind.zip`). Place and extract those files into `binding_moad`.

  Run the script `moad.py` in a compute job on an MPI-enabled cluster
  (e.g., `mpirun -n 64 moad.py`).

  Perform the steps in the notebook `moad.ipynb`.
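
  A minimal sketch of staging those downloads (assuming `every.csv` and
  `nr_bind.zip` were saved to the current directory):

  ```python
  # Minimal sketch: stage the Binding MOAD downloads into binding_moad/.
  import shutil
  import zipfile
  from pathlib import Path

  dest = Path("binding_moad")
  dest.mkdir(exist_ok=True)

  # Place the binding data CSV and extract the non-redundant biounits.
  shutil.copy("every.csv", dest)
  with zipfile.ZipFile("nr_bind.zip") as zf:
      zf.extractall(dest)
  ```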
- BioLIP

  Download the following files from <https://zhanglab.ccmb.med.umich.edu/BioLiP/>:

  - `receptor_nr1.tar.bz2` (Receptor1, non-redundant set)
  - `ligand_nr.tar.bz2` (Ligands)
  - `BioLiP_nr.tar.bz2` (Annotations)

  and extract them into `biolip/data`.

  Run the script `biolip.py` in a compute job on an MPI-enabled cluster
  (e.g., `mpirun -n 64 biolip.py`).

  Perform the steps in the notebook `biolip.ipynb`.
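
  A minimal sketch of unpacking the three archives (assuming they were
  downloaded to the current directory):

  ```python
  # Minimal sketch: unpack the BioLiP archives into biolip/data.
  import tarfile
  from pathlib import Path

  dest = Path("biolip/data")
  dest.mkdir(parents=True, exist_ok=True)

  for name in ("receptor_nr1.tar.bz2", "ligand_nr.tar.bz2",
               "BioLiP_nr.tar.bz2"):
      with tarfile.open(name, "r:bz2") as tf:
          tf.extractall(dest)
  ```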
- Final concatenation and filtering

  Run the steps in the notebook `combine_dbs.ipynb`.
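
  The notebook is authoritative; as a rough sketch of what the concatenation
  step looks like with pandas (the per-source Parquet file names below are
  hypothetical):

  ```python
  # Hypothetical sketch of the final concatenation, not combine_dbs.ipynb.
  import pandas as pd

  parts = [
      pd.read_parquet(p)
      for p in ("bindingdb.parquet", "pdbbind.parquet",
                "binding_moad.parquet", "biolip.parquet")
  ]

  # Merge the per-source tables and drop exact duplicates before writing.
  combined = pd.concat(parts, ignore_index=True).drop_duplicates()
  combined.to_parquet("data/all.parquet")
  ```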