---
tags:
  - chemistry
  - chemical information
pretty_name: Hematotoxicity Dataset
dataset_summary: >-
  A hematotoxicity dataset containing a training set of 1330 molecules, a test
  set of 442 molecules, and an external validation set of 610 new molecules.
citation: |-
  @article{Long2023,
    author = {Long, Teng-Zhi and Shi, Shao-Hua and Liu, Shao and Lu, Ai-Ping and Liu, Zhao-Qian and Li, Min and Hou, Ting-Jun and Cao, Dong-Sheng},
    title = {Structural Analysis and Prediction of Hematotoxicity Using Deep Learning Approaches},
    journal = {Journal of Chemical Information and Modeling},
    volume = {63},
    number = {1},
    pages = {111--125},
    year = {2023},
    doi = {10.1021/acs.jcim.2c01088},
    url = {https://pubs.acs.org/doi/10.1021/acs.jcim.2c01088},
    publisher = {American Chemical Society}
  }
size_categories:
  - 1K<n<10K
config_names:
  - HematoxLong2022
configs:
  - config_name: HematoxLong2022
dataset_info:
  features:
    - name: canonical SMILES
      dtype: string
    - name: Label
      dtype:
        class_label:
          names:
            '0': 0
            '1': 1
---

# Hematotoxicity Dataset (HematoxLong2022)

Initially, a total of 2670 compounds were collected, and all molecules were subjected to a multistep data preparation process to ensure data quality. After this filtering, a hematotoxicity dataset containing 1772 chemicals was obtained, comprising a positive set of 589 molecules and a negative set of 1183 molecules. All of the molecules were divided into a training set of 1330 molecules and a test set of 442 molecules according to their Murcko scaffolds. Additionally, 610 new molecules from related research and databases were compiled as an external validation set.
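
Because the train/test split is scaffold-based, molecules that share a Bemis-Murcko scaffold end up in the same subset. Below is a minimal sketch of how such scaffolds can be computed and grouped with RDKit; the example SMILES and the grouping logic are illustrative assumptions, not the authors' original splitting code.

```python
# Minimal sketch: group molecules by Bemis-Murcko scaffold with RDKit.
# Illustrative only; the example SMILES below are arbitrary.
from collections import defaultdict
from rdkit.Chem.Scaffolds.MurckoScaffold import MurckoScaffoldSmiles

smiles_list = ["CCOC(=O)c1ccccc1N", "c1ccc2[nH]ccc2c1"]

scaffold_groups = defaultdict(list)
for smiles in smiles_list:
    scaffold = MurckoScaffoldSmiles(smiles=smiles)  # canonical scaffold SMILES
    scaffold_groups[scaffold].append(smiles)

# Molecules sharing a scaffold are assigned to the same split, so the test set
# contains scaffolds unseen during training.
for scaffold, members in scaffold_groups.items():
    print(scaffold, len(members))
```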

## Quickstart Usage

### Load a dataset in Python

Each subset can be loaded into Python using the Hugging Face `datasets` library. First, from the command line, install the `datasets` library:

```
$ pip install datasets
```

then, from within Python, import the `datasets` library

```
>>> import datasets
```

and load the HematoxLong2022 dataset, e.g.,

```
>>> HematoxLong2022 = datasets.load_dataset("maomlab/HematoxLong2022", name = "HematoxLong2022")
Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4.24k/4.24k [00:00<00:00, 35.1kB/s]
Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 129k/129k [00:00<00:00, 1.02MB/s]
Generating test split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 442/442 [00:00<00:00, 3.68M examples/s]
Generating train split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.33k/1.33k [00:00<00:00, 12.0M examples/s]
```

and inspect the loaded dataset

```
>>> HematoxLong2022
DatasetDict({
    test: Dataset({
        features: ['canonical SMILES', 'Label'],
        num_rows: 442
    })
    train: Dataset({
        features: ['canonical SMILES', 'Label'],
        num_rows: 1330
    })
})
```
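
Beyond the `datasets` API, any split can be converted to a pandas DataFrame for quick inspection. A brief sketch follows; the column names are taken from the `dataset_info` above and should be verified against the loaded data.

```python
# Convert the training split to pandas and inspect the class balance.
train_df = HematoxLong2022["train"].to_pandas()

print(train_df.columns.tolist())         # expected: ['canonical SMILES', 'Label']
print(train_df["Label"].value_counts())  # counts of toxic (1) vs non-toxic (0) molecules
```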

### Use a dataset to train a model

One way to use the dataset is through the MolFlux package developed by Exscientia. First, from the command line, install the MolFlux library with CatBoost and RDKit support:

```
pip install 'molflux[catboost,rdkit]'
```

then load, featurise, fit, and evaluate a CatBoost model:

```python
import json
from datasets import load_dataset
from molflux.datasets import featurise_dataset
from molflux.features import load_from_dicts as load_representations_from_dicts
from molflux.modelzoo import load_from_dict as load_model_from_dict
from molflux.metrics import load_suite

# the dataset is distributed already split into train and test subsets
split_dataset = load_dataset('maomlab/HematoxLong2022', name = 'HematoxLong2022')

# featurise the SMILES column with Morgan and MACCS fingerprints;
# featurised columns are named "<column>::<representation>"
split_featurised_dataset = featurise_dataset(
    split_dataset,
    column = "canonical SMILES",
    representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}]))

model = load_model_from_dict({
    "name": "cat_boost_classifier",
    "config": {
        "x_features": ['canonical SMILES::morgan', 'canonical SMILES::maccs_rdkit'],
        "y_features": ['Label']}})

model.train(split_featurised_dataset["train"])
preds = model.predict(split_featurised_dataset["test"])

classification_suite = load_suite("classification")

scores = classification_suite.compute(
    references=split_featurised_dataset["test"]['Label'],
    predictions=preds["cat_boost_classifier::Label"])

# pretty-print the computed metrics
print(json.dumps(scores, indent = 2, default = str))
```
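
The fitted model can also be applied to new molecules, provided they are featurised the same way as the training data. A minimal sketch under two assumptions: the SMILES strings below are arbitrary examples, and `featurise_dataset` / `model.predict` are assumed to accept a single `datasets.Dataset` containing only the feature column.

```python
import datasets

# Hypothetical example: score a few new molecules with the trained model.
# The column name and representations must match those used for training.
new_molecules = datasets.Dataset.from_dict(
    {"canonical SMILES": ["CCOC(=O)c1ccccc1N", "c1ccc2[nH]ccc2c1"]})

new_featurised = featurise_dataset(
    new_molecules,
    column = "canonical SMILES",
    representations = load_representations_from_dicts(
        [{"name": "morgan"}, {"name": "maccs_rdkit"}]))

new_preds = model.predict(new_featurised)
print(new_preds["cat_boost_classifier::Label"])  # predicted labels for the new molecules
```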

## Citation

Long, T.-Z.; Shi, S.-H.; Liu, S.; Lu, A.-P.; Liu, Z.-Q.; Li, M.; Hou, T.-J.; Cao, D.-S. Structural Analysis and Prediction of Hematotoxicity Using Deep Learning Approaches. *J. Chem. Inf. Model.* 2023, 63 (1), 111–125. Publication date: December 6, 2022. https://doi.org/10.1021/acs.jcim.2c01088. Copyright Β© 2024 American Chemical Society.