---
license: mit
language:
- en
tags:
- chemistry
- medicinal chemistry
pretty_name: AggregatorAdvisor
size_categories:
- 10K<n<100K
dataset_summary: >-
  AggregatorAdvisor identifies molecules that are known to aggregate or may
  aggregate in biochemical assays based on the chemical similarity to known
  aggregators, and physical properties. In the default affinity range of 100 nM
  to 10 μM, if calculated LogP > 3 and Tc ≥ 85%, the user is informed that this
  compound should be investigated as an aggregator.  If either calculated LogP >
  3 or Tc > 85%, the user is warned that one of these two contributing criteria
  is in effect and that controls should be run.  If neither of these is true,
  this is reported, and the user is counseled that controls are always advised.
  The train and test datasets were created after sanitizing and splitting the
  original dataset in the paper below.
citation: |-
  @article{Irwin2015,
    title = {An Aggregation Advisor for Ligand Discovery},
    author = {Irwin, John J. and Duan, Da and Torosyan, Hayarpi and Doak, Allison K. and
      Ziebart, Kristin T. and Sterling, Teague and Tumanian, Gurgen and Shoichet, Brian K.},
    journal = {Journal of Medicinal Chemistry},
    publisher = {American Chemical Society (ACS)},
    volume = {58},
    number = {17},
    pages = {7076--7087},
    year = {2015},
    month = aug,
    issn = {1520-4804},
    doi = {10.1021/acs.jmedchem.5b01105},
    url = {http://dx.doi.org/10.1021/acs.jmedchem.5b01105}
  }
config_names:
- AggregatorAdvisor
configs:
- config_name: AggregatorAdvisor
  data_files:
  - split: test
    path: AggregatorAdvisor/test.csv
  - split: train
    path: AggregatorAdvisor/train.csv
dataset_info:
- config_name: AggregatorAdvisor
  features:
  - name: new SMILES
    dtype: string
  - name: substance_id
    dtype: string
  - name: aggref_index
    dtype: int64
  - name: logP
    dtype: float64
  - name: reference
    dtype: string
  splits:
  - name: train
    num_bytes: 404768
    num_examples: 10116
  - name: test
    num_bytes: 101288
    num_examples: 2529
---

# Aggregator Advisor 
The train and test datasets uploaded to our Hugging Face repository were sanitized and split from the original dataset, which contains 12,645 compounds.
If you want to reproduce these steps from the original dataset, follow the instructions in the [Preprocessing Script.py](https://huggingface.co/datasets/maomlab/AggregatorAdvisor/blob/main/Preprocessing%20Script.py) file in this repository.
[raw_data.csv](https://huggingface.co/datasets/maomlab/AggregatorAdvisor/blob/main/raw_data.csv) is the original dataset from the paper,
and the files in [AggregatorAdvisor](https://huggingface.co/datasets/maomlab/AggregatorAdvisor/tree/main/AggregatorAdvisor) are the sanitized versions that we created.
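
The authoritative sanitization steps are defined in the preprocessing script linked above. As a rough, assumption-based sketch of what such a step can look like (the column name `new SMILES` in `raw_data.csv` and the use of RDKit canonicalization are assumptions here, not a description of the actual script), one might canonicalize the SMILES strings and drop entries that fail to parse:

    # Illustrative sketch only -- see "Preprocessing Script.py" for the authoritative steps.
    import pandas as pd
    from rdkit import Chem

    raw = pd.read_csv("raw_data.csv")

    def canonicalize(smiles):
        """Return canonical SMILES, or None if RDKit cannot parse the input."""
        mol = Chem.MolFromSmiles(smiles)
        return Chem.MolToSmiles(mol) if mol is not None else None

    # Canonicalize each SMILES, then drop unparseable entries and duplicates
    raw["new SMILES"] = raw["new SMILES"].map(canonicalize)
    sanitized = raw.dropna(subset=["new SMILES"]).drop_duplicates(subset=["new SMILES"])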

## Quickstart Usage

### Load a dataset in python
Each subset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library.
First, from the command line, install the `datasets` library

    $ pip install datasets

then, from within Python, load the `datasets` library

    >>> import datasets
   
and load one of the `AggregatorAdvisor` datasets, e.g.,

    >>> AggregatorAdvisor = datasets.load_dataset("maomlab/AggregatorAdvisor", name = "AggregatorAdvisor")
    Downloading readme: 100%|██████████| 4.70k/4.70k [00:00<00:00, 277kB/s]
    Downloading data: 100%|██████████| 530k/530k [00:00<00:00, 303kB/s]
    Downloading data: 100%|██████████| 2.16M/2.16M [00:00<00:00, 12.1MB/s]
    Generating test split: 100%|██████████| 2529/2529 [00:00<00:00, 29924.07 examples/s]
    Generating train split: 100%|██████████| 10116/10116 [00:00<00:00, 95081.99 examples/s]

and inspect the loaded dataset

    >>> AggregatorAdvisor
    DatasetDict({
        test: Dataset({
            features: ['new SMILES', 'substance_id', 'aggref_index', 'logP', 'reference'],
            num_rows: 2529
        })
        train: Dataset({
            features: ['new SMILES', 'substance_id', 'aggref_index', 'logP', 'reference'],
            num_rows: 10116
        })
    })
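
Each split is a standard `datasets.Dataset`, so it can also be converted to a pandas DataFrame for quick exploration, e.g.,

    >>> train_df = AggregatorAdvisor["train"].to_pandas()
    >>> train_df[["new SMILES", "logP"]].head()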

### Use a dataset to train a model
One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia.
First, from the command line, install the `molflux` library with `catboost` and `rdkit` support

    pip install 'molflux[catboost,rdkit]'

then, from within Python, import the required modules

    import json
    from datasets import load_dataset
    from molflux.datasets import featurise_dataset
    from molflux.features import load_from_dicts as load_representations_from_dicts
    from molflux.splits import load_from_dict as load_split_from_dict
    from molflux.modelzoo import load_from_dict as load_model_from_dict
    from molflux.metrics import load_suite

Load the pre-split dataset, featurise it, then train and evaluate the CatBoost model
    
    # Load the pre-split train/test dataset from the Hugging Face Hub
    split_dataset = load_dataset('maomlab/AggregatorAdvisor', name = 'AggregatorAdvisor')

    # Featurise the SMILES column with Morgan and MACCS fingerprints
    split_featurised_dataset = featurise_dataset(
        split_dataset,
        column = "new SMILES",
        representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}]))

    # Define a CatBoost regressor that predicts logP from the fingerprint features
    model = load_model_from_dict({
        "name": "cat_boost_regressor",
        "config": {
            "x_features": ['new SMILES::morgan', 'new SMILES::maccs_rdkit'],
            "y_features": ['logP']}})

    # Train on the train split and predict on the test split
    model.train(split_featurised_dataset["train"])
    preds = model.predict(split_featurised_dataset["test"])

    # Score the predictions with the standard regression metric suite
    regression_suite = load_suite("regression")
    scores = regression_suite.compute(
        references=split_featurised_dataset["test"]['logP'],
        predictions=preds["cat_boost_regressor::logP"])
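
The returned `scores` behaves like an ordinary Python dictionary mapping metric names to values, so the results can be inspected directly, e.g.,

    # Print each regression metric computed by the suite
    for metric, value in scores.items():
        print(f"{metric}: {value}")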

### Data splits
Here we have used the `Realistic Split` method described in [(Martin et al., 2018)](https://doi.org/10.1021/acs.jcim.7b00166) to split the AggregatorAdvisor dataset.
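
As a minimal sanity check (not part of the original preprocessing, just an illustrative snippet using the `AggregatorAdvisor` dataset loaded in the Quickstart above), you can confirm that no SMILES string appears in both splits:

    # Collect the SMILES strings from each split and check that they do not overlap
    train_smiles = set(AggregatorAdvisor["train"]["new SMILES"])
    test_smiles = set(AggregatorAdvisor["test"]["new SMILES"])
    assert train_smiles.isdisjoint(test_smiles), "train/test splits share SMILES strings"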

## Citation
Irwin, J. J.; Duan, D.; Torosyan, H.; Doak, A. K.; Ziebart, K. T.; Sterling, T.; Tumanian, G.; Shoichet, B. K. An Aggregation Advisor for Ligand Discovery. *J. Med. Chem.* 2015, 58 (17), 7076–7087.
Publication date: August 21, 2015
https://doi.org/10.1021/acs.jmedchem.5b01105