---
license: mit
language: en
tags:
- chemistry
- chemical information
task_categories:
- tabular-classification
pretty_name: Hematotoxicity Dataset
dataset_summary: >-
  The hematotoxicity dataset consists of a training set with 1788 molecules and
  a test set with 594 molecules. The train and test sets were created by
  sanitizing and splitting the original dataset from the paper below.
citation: |-
  @article{Long2023hematotoxicity,
    author = {Long, Teng-Zhi and Shi, Shao-Hua and Liu, Shao and Lu, Ai-Ping and Liu, Zhao-Qian and Li, Min and Hou, Ting-Jun and Cao, Dong-Sheng},
    doi = {10.1021/acs.jcim.2c01088},
    journal = {Journal of Chemical Information and Modeling},
    number = {1},
    pages = {111--125},
    title = {Structural Analysis and Prediction of Hematotoxicity Using Deep Learning Approaches},
    volume = {63},
    year = {2023},
    url = {https://pubs.acs.org/doi/10.1021/acs.jcim.2c01088},
    publisher = {ACS Publications}
  }
size_categories:
- 1K<n<10K
config_names:
- HematoxLong2023
configs:
- config_name: HematoxLong2023
  data_files:
  - split: test
    path: HematoxLong2023/test.csv
  - split: train
    path: HematoxLong2023/train.csv
dataset_info:
- config_name: HematoxLong2023
  features:
  - name: "SMILES"
    dtype: string
  - name: "Label"
    dtype:
      class_label:
        names:
          0: "negative"
          1: "positive"
  splits:
  - name: train
    num_bytes: 28736
    num_examples: 1788
  - name: test
    num_bytes: 9632
    num_examples: 594

---

# Hematotoxicity Dataset (HematoxLong2023)
A hematotoxicity dataset containing 1772 chemicals was obtained, which includes a positive set with 589 molecules and a negative set with 1183 molecules.
The molecules were divided into a training set of 1330 molecules and a test set of 442 molecules according to their Murcko scaffolds. 
Additionally, 610 new molecules from related research and databases were compiled as the external validation set.
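
The scaffold-based split described above keeps every Murcko scaffold entirely within one split, so the test set probes generalization to unseen chemotypes. As an illustration only (not the authors' code), here is a minimal pure-Python sketch of such a grouped split; the `scaffold_of` callable is a stand-in for a real scaffold function such as RDKit's `MurckoScaffold.MurckoScaffoldSmiles`:

```python
from collections import defaultdict

def scaffold_split(smiles_list, scaffold_of, train_fraction=0.75):
    """Assign whole scaffold groups to train or test, so that no
    scaffold ever appears in both splits (largest groups fill train first)."""
    groups = defaultdict(list)
    for smi in smiles_list:
        groups[scaffold_of(smi)].append(smi)
    ordered = sorted(groups.values(), key=len, reverse=True)
    train_target = int(len(smiles_list) * train_fraction)
    train, test = [], []
    for members in ordered:
        if len(train) + len(members) <= train_target:
            train.extend(members)
        else:
            test.extend(members)
    return train, test

# Toy molecules: the first character stands in for the Murcko scaffold.
toy = ["c1", "c2", "c3", "n1", "n2", "o1"]
train, test = scaffold_split(toy, scaffold_of=lambda s: s[0], train_fraction=0.5)
```

Because groups are assigned whole, the two splits share no scaffold, at the cost of split sizes that only approximate the requested fraction.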

The train and test datasets uploaded to our Hugging Face repository have been sanitized and split from the original dataset, which contains 2382 molecules.
If you would like to reproduce these steps from the original dataset, please follow the instructions in the [Preprocessing Script.py](https://huggingface.co/datasets/maomlab/HematoxLong2023/blob/main/Preprocessing%20Script.py) file located in the HematoxLong2023 repository.

## Quickstart Usage

### Load a dataset in Python
Each subset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library.
First, from the command line, install the `datasets` library

    $ pip install datasets

then, from within Python, load the `datasets` library

    >>> import datasets
   
and load one of the `HematoxLong2023` datasets, e.g.,

    >>> HematoxLong2023 = datasets.load_dataset("maomlab/HematoxLong2023", name = "HematoxLong2023")
    Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5.23k/5.23k [00:00<00:00, 35.1kB/s]
    Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 34.5k/34.5k [00:00<00:00, 155kB/s]
    Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 97.1k/97.1k [00:00<00:00, 587kB/s]
    Generating test split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 594/594 [00:00<00:00, 12705.92 examples/s]
    Generating train split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1788/1788 [00:00<00:00, 43895.91 examples/s]

and inspecting the loaded dataset

    >>> HematoxLong2023
    DatasetDict({
        test: Dataset({
            features: ['SMILES', 'Label'],
            num_rows: 594
        })
        train: Dataset({
            features: ['SMILES', 'Label'],
            num_rows: 1788
        })
    })
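
Before training, it is worth checking the class balance of each split, since hematotoxicity labels are imbalanced (roughly one positive for every two negatives in the original data). A minimal standard-library sketch; the hard-coded `labels` list is a stand-in for `HematoxLong2023["train"]["Label"]`:

```python
from collections import Counter

# Stand-in labels; in practice iterate over HematoxLong2023["train"]["Label"].
labels = [1, 0, 0, 1, 0, 0, 0, 1]

counts = Counter(labels)                    # tally positives and negatives
positive_fraction = counts[1] / len(labels)  # share of toxic molecules
```

A strongly skewed `positive_fraction` suggests using class weights or balanced metrics when fitting a model.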

### Use a dataset to train a model
One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia.
First, from the command line, install the `molflux` library with CatBoost and RDKit support

    pip install 'molflux[catboost,rdkit]'

then import the pieces needed to load, featurise, train, and evaluate the model

    from datasets import load_dataset
    from molflux.datasets import featurise_dataset
    from molflux.features import load_from_dicts as load_representations_from_dicts
    from molflux.modelzoo import load_from_dict as load_model_from_dict
    from molflux.metrics import load_suite

Load the pre-split dataset, featurise it, then train and evaluate the CatBoost model
    
    split_dataset = load_dataset('maomlab/HematoxLong2023', name = 'HematoxLong2023')
    
    split_featurised_dataset = featurise_dataset(
      split_dataset,
      column = "SMILES",
      representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}]))

    model = load_model_from_dict({
        "name": "cat_boost_classifier",
        "config": {
            "x_features": ['SMILES::morgan', 'SMILES::maccs_rdkit'],
            "y_features": ['Label']}})
    
    model.train(split_featurised_dataset["train"])
    preds = model.predict(split_featurised_dataset["test"])
    
    classification_suite = load_suite("classification")
    
    scores = classification_suite.compute(
        references=split_featurised_dataset["test"]['Label'],
        predictions=preds["cat_boost_classifier::Label"])        
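
The `classification` suite returns a dictionary of metrics. As a sanity check on any such output, accuracy can be recomputed by hand from the references and predictions; the lists below are dummy stand-ins, not values from this dataset:

```python
def accuracy(references, predictions):
    """Fraction of predictions that exactly match the reference labels."""
    matches = sum(r == p for r, p in zip(references, predictions))
    return matches / len(references)

# Dummy labels standing in for the test-split references and model output.
refs = [1, 0, 1, 1, 0]
preds = [1, 0, 0, 1, 0]
acc = accuracy(refs, preds)  # 4 of 5 correct
```

Given the class imbalance noted above, metrics such as balanced accuracy or AUROC from the suite are more informative than raw accuracy alone.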

## Citation
Cite this: J. Chem. Inf. Model. 2023, 63, 1, 111–125
Publication Date: December 6, 2022
https://doi.org/10.1021/acs.jcim.2c01088
Copyright Β© 2024 American Chemical Society