|
--- |
|
license: mit |
|
language: |
|
- en |
|
tags: |
|
- chemistry |
|
- medicinal chemistry |
|
pretty_name: AggregatorAdvisor |
|
size_categories: |
|
- 10K<n<100K |
|
dataset_summary: >- |
|
AggregatorAdvisor identifies molecules that are known to aggregate or may aggregate in biochemical assays. |
|
  The approach is based on chemical similarity to known aggregators and on physical properties.
|
The AggregatorAdvisor dataset contains 12645 compounds from 20 different sources. |
|
citation: |-
  @article{Irwin2015,
    title = {An Aggregation Advisor for Ligand Discovery},
    volume = {58},
    ISSN = {1520-4804},
    url = {http://dx.doi.org/10.1021/acs.jmedchem.5b01105},
    DOI = {10.1021/acs.jmedchem.5b01105},
    number = {17},
    journal = {Journal of Medicinal Chemistry},
    publisher = {American Chemical Society (ACS)},
    author = {Irwin, John J. and Duan, Da and Torosyan, Hayarpi and Doak, Allison K. and Ziebart, Kristin T. and Sterling, Teague and Tumanian, Gurgen and Shoichet, Brian K.},
    year = {2015},
    month = aug,
    pages = {7076--7087}
  }
|
config_names: |
|
- AggregatorAdvisor |
|
configs: |
|
- config_name: AggregatorAdvisor |
|
data_files: |
|
- split: test |
|
path: AggregatorAdvisor/test.csv |
|
- split: train |
|
path: AggregatorAdvisor/train.csv |
|
dataset_info: |
|
- config_name: AggregatorAdvisor |
|
features: |
|
- name: "new SMILES" |
|
dtype: string |
|
- name: "substance_id" |
|
dtype: string |
|
- name: "aggref_index" |
|
dtype: int64 |
|
- name: "logP" |
|
dtype: float64 |
|
- name: "reference" |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 404768 |
|
num_examples: 10116 |
|
- name: test |
|
num_bytes: 101288 |
|
num_examples: 2529 |
|
--- |
|
|
|
# Aggregator Advisor |
|
|
|
|
|
## Quickstart Usage |
|
|
|
### Load a dataset in Python

Each subset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library.

First, from the command line, install the `datasets` library
|
|
|
```shell
$ pip install datasets
```
|
|
|
then, from within Python, load the `datasets` library
|
|
|
```python
>>> import datasets
```
|
|
|
and load the `AggregatorAdvisor` dataset, e.g.,
|
|
|
```python
>>> AggregatorAdvisor = datasets.load_dataset("maomlab/AggregatorAdvisor", name = "AggregatorAdvisor")
Downloading readme: 100%|██████████| 4.70k/4.70k [00:00<00:00, 277kB/s]
Downloading data: 100%|██████████| 530k/530k [00:00<00:00, 303kB/s]
Downloading data: 100%|██████████| 2.16M/2.16M [00:00<00:00, 12.1MB/s]
Generating test split: 100%|██████████| 2529/2529 [00:00<00:00, 29924.07 examples/s]
Generating train split: 100%|██████████| 10116/10116 [00:00<00:00, 95081.99 examples/s]
```
|
|
|
and inspecting the loaded dataset |
|
|
|
```python
>>> AggregatorAdvisor
DatasetDict({
    test: Dataset({
        features: ['new SMILES', 'substance_id', 'aggref_index', 'logP', 'reference'],
        num_rows: 2529
    })
    train: Dataset({
        features: ['new SMILES', 'substance_id', 'aggref_index', 'logP', 'reference'],
        num_rows: 10116
    })
})
```
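For intuition, the advisor's flagging rule combines chemical similarity to known aggregators with lipophilicity (the `logP` column). The sketch below is a toy illustration of that idea, not the published implementation: the 0.85 Tanimoto and logP ≥ 3 cutoffs are our reading of Irwin et al. (2015), and the fingerprints are hand-made bit sets rather than real Morgan fingerprints.

```python
# Toy sketch of an AggregatorAdvisor-style flagging rule.
# Assumptions: 0.85 similarity and logP >= 3 thresholds (per our reading of
# Irwin et al. 2015); fingerprints are illustrative bit sets, not real ones.

def tanimoto(fp_a: frozenset, fp_b: frozenset) -> float:
    """Tanimoto coefficient between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def likely_aggregator(fp, logp, known_aggregator_fps,
                      sim_threshold=0.85, logp_threshold=3.0) -> bool:
    """Flag a molecule that is both similar to a known aggregator and lipophilic."""
    best_sim = max((tanimoto(fp, ref) for ref in known_aggregator_fps), default=0.0)
    return best_sim >= sim_threshold and logp >= logp_threshold

# Toy example: a query sharing 6 of 7 bits with a known aggregator.
known = [frozenset({1, 2, 3, 4, 5, 6})]
query = frozenset({1, 2, 3, 4, 5, 6, 7})
print(likely_aggregator(query, logp=4.2, known_aggregator_fps=known))  # True
```

In practice the web service uses real chemical fingerprints over the 12,645 reference aggregators in this dataset; this snippet only shows the shape of the decision rule.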
|
|
|
### Use a dataset to train a model |
|
One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia. |
|
First, from the command line, install the `molflux` library with `catboost` and `rdkit` support
|
|
|
```shell
pip install 'molflux[catboost,rdkit]'
```
|
|
|
then import the required modules
|
|
|
```python
import json
from datasets import load_dataset
from molflux.datasets import featurise_dataset
from molflux.features import load_from_dicts as load_representations_from_dicts
from molflux.splits import load_from_dict as load_split_from_dict
from molflux.modelzoo import load_from_dict as load_model_from_dict
from molflux.metrics import load_suite
```
|
|
|
and load the dataset, featurise it, then train and evaluate the CatBoost model
|
|
|
```python
split_dataset = load_dataset('maomlab/AggregatorAdvisor', name = 'AggregatorAdvisor')

split_featurised_dataset = featurise_dataset(
    split_dataset,
    column = "new SMILES",
    representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}]))

model = load_model_from_dict({
    "name": "cat_boost_regressor",
    "config": {
        "x_features": ['new SMILES::morgan', 'new SMILES::maccs_rdkit'],
        "y_features": ['logP']}})

model.train(split_featurised_dataset["train"])

preds = model.predict(split_featurised_dataset["test"])

regression_suite = load_suite("regression")

scores = regression_suite.compute(
    references=split_featurised_dataset["test"]['logP'],
    predictions=preds["cat_boost_regressor::logP"])
```
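The `scores` dictionary holds standard regression metrics. As a sanity check on what the suite reports, two common metrics can be computed by hand in plain Python; the arrays below are toy numbers, not real model output.

```python
# Hand-rolled versions of two standard regression metrics, for sanity-checking
# suite output. The reference/prediction values here are illustrative only.

def mean_absolute_error(refs, preds):
    """Average absolute difference between references and predictions."""
    return sum(abs(r - p) for r, p in zip(refs, preds)) / len(refs)

def r2_score(refs, preds):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean_ref = sum(refs) / len(refs)
    ss_res = sum((r - p) ** 2 for r, p in zip(refs, preds))
    ss_tot = sum((r - mean_ref) ** 2 for r in refs)
    return 1.0 - ss_res / ss_tot

refs = [2.1, 3.5, 4.0, 1.2]   # toy logP references
preds = [2.0, 3.7, 3.8, 1.5]  # toy predictions
print(mean_absolute_error(refs, preds))  # 0.2
print(r2_score(refs, preds))
```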
|
|
|
## Citation |
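If you use this dataset, please cite the Aggregator Advisor paper (this is the same reference recorded in the card metadata above):

```bibtex
@article{Irwin2015,
  title = {An Aggregation Advisor for Ligand Discovery},
  volume = {58},
  ISSN = {1520-4804},
  url = {http://dx.doi.org/10.1021/acs.jmedchem.5b01105},
  DOI = {10.1021/acs.jmedchem.5b01105},
  number = {17},
  journal = {Journal of Medicinal Chemistry},
  publisher = {American Chemical Society (ACS)},
  author = {Irwin, John J. and Duan, Da and Torosyan, Hayarpi and Doak, Allison K. and Ziebart, Kristin T. and Sterling, Teague and Tumanian, Gurgen and Shoichet, Brian K.},
  year = {2015},
  month = aug,
  pages = {7076--7087}
}
```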
|
|