
# How to use the data sets

## Use the already preprocessed data

Load the dataset using:

```python
from datasets import load_dataset

dataset = load_dataset("jglaser/binding_affinity")
```
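
Once loaded, the dataset behaves like an ordinary `datasets.DatasetDict`, so you can inspect the splits, row counts, and columns directly. A minimal sketch (assuming the default `train` split; the exact column names depend on the preprocessing version):

```python
# Show available splits, number of rows, and column names
print(dataset)

# Look at one record from the (assumed) "train" split;
# column names may differ between dataset versions
example = dataset["train"][0]
for key, value in example.items():
    print(key, str(value)[:80])
```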

The file `data/all.parquet` contains the preprocessed data. To obtain it when cloning the repository, you need to download and install [git LFS support](https://git-lfs.github.com/).
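
If you prefer to work with the parquet file directly rather than through `datasets`, one option is to clone the repository (with git LFS installed) and read the file with pandas. A minimal sketch, assuming the repository was cloned into `binding_affinity/`:

```python
import pandas as pd

# After: git lfs install && git clone https://huggingface.co/datasets/jglaser/binding_affinity
df = pd.read_parquet("binding_affinity/data/all.parquet")

print(df.shape)
print(df.columns.tolist())
```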

## Pre-process yourself

To perform the preprocessing manually, download the data sets from:

1. BindingDB

From BindingDB, download the database as tab-separated values (https://bindingdb.org > Download > `BindingDB_All_2021m4.tsv.zip`) and extract the zip archive into `bindingdb/data`.

Run the steps in the notebook `bindingdb.ipynb`.
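
The notebook performs the actual cleaning; as a rough sketch of its first step, the extracted TSV can be read in chunks with pandas. The file name and column names below are assumptions based on a typical BindingDB export, so check the header row of the file you downloaded:

```python
import pandas as pd

# Assumed name of the extracted file; adjust to your download
path = "bindingdb/data/BindingDB_All.tsv"

# The full dump is large, so read it in chunks.
# Column names are assumptions -- verify them against the file header.
cols = ["Ligand SMILES", "BindingDB Target Chain Sequence", "Kd (nM)", "IC50 (nM)"]
chunks = pd.read_csv(path, sep="\t", usecols=cols, chunksize=100_000,
                     on_bad_lines="skip", low_memory=False)

df = pd.concat(chunks, ignore_index=True)
print(df.shape)
```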

2. PDBBind-cn

Register for an account at https://www.pdbbind.org.cn/, confirm the validation email, then log in and download:

- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)

Extract those files into `pdbbind/data`.

Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster (e.g., `mpirun -n 64 pdbbind.py`).
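
The MPI scripts themselves (`pdbbind.py`, and likewise `moad.py` and `biolip.py` below) are part of the repository and are not reproduced here. The sketch below only illustrates the general pattern of such a compute job, i.e. distributing the list of complexes over MPI ranks with `mpi4py`; the file layout and the `parse_complex` helper are hypothetical:

```python
import glob

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical input layout: one sub-directory per protein-ligand complex
complexes = sorted(glob.glob("pdbbind/data/*/"))

# Each rank processes a disjoint slice of the entries
my_work = complexes[rank::size]

results = []
for path in my_work:
    # parse_complex() is a placeholder for the real parsing/featurization step
    # results.append(parse_complex(path))
    pass

# Collect per-rank results on rank 0, e.g. to write a single output file
all_results = comm.gather(results, root=0)
if rank == 0:
    flat = [row for chunk in all_results for row in chunk]
    print(f"processed {len(flat)} complexes")
```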

Perform the steps in the notebook `pdbbind.ipynb`.

3. BindingMOAD

Go to https://bindingmoad.org and download the files `every.csv` (All of Binding MOAD, Binding Data) and the non-redundant biounits (`nr_bind.zip`). Place and extract those files into `binding_moad`.

Run the script `moad.py` in a compute job on an MPI-enabled cluster (e.g., `mpirun -n 64 moad.py`).

Perform the steps in the notebook `moad.ipynb`.

4. BioLIP

Download the following files from https://zhanglab.ccmb.med.umich.edu/BioLiP/:

- `receptor_nr1.tar.bz2` (Receptor1, Non-redundant set)
- `ligand_nr.tar.bz2` (Ligands)
- `BioLiP_nr.tar.bz2` (Annotations)

and extract them into `biolip/data`.
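
A minimal way to extract the three archives into place with Python's standard library (equivalent to running `tar xjf` on each file, and assuming the archives sit in the working directory):

```python
import tarfile
from pathlib import Path

target = Path("biolip/data")
target.mkdir(parents=True, exist_ok=True)

for name in ["receptor_nr1.tar.bz2", "ligand_nr.tar.bz2", "BioLiP_nr.tar.bz2"]:
    with tarfile.open(name, "r:bz2") as tar:
        tar.extractall(path=target)
```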

Run the script `biolip.py` in a compute job on an MPI-enabled cluster (e.g., `mpirun -n 64 biolip.py`).

Perform the steps in the notebook `biolip.ipynb`.

5. Final concatenation and filtering

Run the steps in the notebook `combine_dbs.ipynb`.
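
The combination logic lives in that notebook; conceptually it amounts to concatenating the per-database tables and filtering out incomplete and duplicate entries. A rough sketch of that idea, where the intermediate file names and column names are assumptions rather than the notebook's actual outputs:

```python
import pandas as pd

# Hypothetical per-database outputs of the previous steps
parts = [
    "bindingdb/data/bindingdb.parquet",
    "pdbbind/data/pdbbind.parquet",
    "binding_moad/moad.parquet",
    "biolip/data/biolip.parquet",
]

df = pd.concat((pd.read_parquet(p) for p in parts), ignore_index=True)

# Illustrative filtering: drop rows missing a sequence, SMILES, or affinity,
# then remove duplicate protein-ligand pairs (column names are assumptions)
df = df.dropna(subset=["seq", "smiles", "affinity"])
df = df.drop_duplicates(subset=["seq", "smiles"])

df.to_parquet("data/all.parquet")
```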