language: en
source_datasets: curated
license: mit
tags:
  - chemistry
  - toxicology
pretty_name: Human & Rat Liver Microsomal Stability
dataset_summary: >-
  Curation of databases of compounds for assessing human liver microsomes (HLM)
  stability and rat liver microsomes (RLM) stability.
citation: |-
  @article{li2022hlmrlm,
    author = {Li, Longqiang and Lu, Zhou and Liu, Guixia and Tang, Yun and Li, Weihua},
    doi = {10.1021/acs.chemrestox.2c00207},
    journal = {Chemical Research in Toxicology},
    number = {9},
    pages = {1614--1624},
    title = {In Silico Prediction of Human and Rat Liver Microsomal Stability via Machine Learning Methods},
    volume = {35},
    year = {2022},
    url = {https://pubs.acs.org/doi/10.1021/acs.chemrestox.2c00207},
    publisher = {American Chemical Society}
  }
size_categories:
  - 10K<n<100K
config_names:
  - HLM
  - RLM
  - Marketed_Drug
configs:
  - config_name: HLM
    data_files:
      - split: test
        path: HLM/test.csv
      - split: train
        path: HLM/train.csv
      - split: external
        path: HLM/external.csv
  - config_name: RLM
    data_files:
      - split: test
        path: RLM/test.csv
      - split: train
        path: RLM/train.csv
      - split: external
        path: RLM/external.csv
  - config_name: Marketed_Drug
dataset_info:
  - config_name: HLM
    features:
      - name: ID
        dtype: string
      - name: SMILES
        dtype: string
      - name: 'Y'
        dtype:
          class_label:
            names:
              '0': 0
              '1': 1
    splits:
      - name: train
        num_bytes: 190968
        num_examples: 4771
      - name: test
        num_bytes: 45368
        num_examples: 1131
      - name: external
        num_bytes: 4568
        num_examples: 111
  - config_name: RLM
    features:
      - name: ID
        dtype: string
      - name: SMILES
        dtype: string
      - name: 'Y'
        dtype:
          class_label:
            names:
              '0': 0
              '1': 1
    splits:
      - name: train
        num_bytes: 100608
        num_examples: 2512
      - name: test
        num_bytes: 23968
        num_examples: 596
      - name: external
        num_bytes: 99408
        num_examples: 2484
  - config_name: Marketed_Drug
    features:
      - name: SMILES
        dtype: string
      - name: Class
        dtype:
          class_label:
            names:
              '0': 0
              '1': 1
      - name: Online server predicted class
        dtype:
          class_label:
            names:
              '0': 0
              '1': 1
      - name: Our predicted class
        dtype:
          class_label:
            names:
              '0': 0
              '1': 1
task_categories:
  - tabular-classification

Human & Rat Liver Microsomal Stability

3345 RLM and 6420 HLM compounds were initially collected from the ChEMBL bioactivity database (HLM IDs: 613373, 2367379, and 612558; RLM IDs: 613694, 2367428, and 612558). After curation, the RLM stability data set contains 3108 compounds and the HLM stability data set contains 5902 compounds. In the RLM stability data set, 1542 (49.6%) compounds are classified as stable and 1566 (50.4%) as unstable; the training and test sets contain 2512 and 596 compounds, respectively. Experimental data from the National Center for Advancing Translational Sciences (PubChem AID 1508591) were used as the RLM external set. In the HLM data set, 3799 (64%) compounds are classified as stable and 2103 (36%) as unstable. In addition, an external set from Liu et al. was used to evaluate the predictive power of the HLM model.

The datasets uploaded to our Hugging Face repository are sanitized and reorganized versions of the original data: the molecules from the original paper were sanitized using MolVS.
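
As a rough illustration of that sanitization step, the sketch below standardizes a single SMILES string with MolVS's standardize_smiles helper; the example molecule and the default settings are assumptions here, not the exact options used for this curation.

from molvs import standardize_smiles

# standardize one SMILES string with MolVS defaults
# (the settings used for the actual dataset curation may differ)
raw_smiles = "C1=CC=CC=C1O"  # hypothetical example molecule (phenol)
clean_smiles = standardize_smiles(raw_smiles)
print(clean_smiles)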

Quickstart Usage

Load a dataset in Python

Each subset can be loaded into Python using the Hugging Face datasets library. First, from the command line, install the datasets library

$ pip install datasets

then, from within Python, load the datasets library

>>> import datasets

and load one of the HLM_RLM datasets, e.g.,

>>> HLM = datasets.load_dataset("maomlab/HLM_RLM", name = "HLM")
Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6.93k/6.93k [00:00<00:00, 280kB/s]
Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 680k/680k [00:00<00:00, 946kB/s]
Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 925k/925k [00:01<00:00, 634kB/s]
Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 39.7k/39.7k [00:00<00:00, 90.8kB/s]
Generating test split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1131/1131 [00:00<00:00, 20405.98 examples/s]
Generating train split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4771/4771 [00:00<00:00, 65495.46 examples/s]
Generating external split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 111/111 [00:00<00:00, 3651.94 examples/s]

and inspect the loaded dataset

>>> HLM
DatasetDict({
  test: Dataset({
     features: ['ID','SMILES', 'Y'],
     num_rows: 1131
  })
  train: Dataset({
      features: ['ID','SMILES', 'Y'],
      num_rows: 4771
  }) 
  external: Dataset({
      features: ['ID','SMILES', 'Y'],
      num_rows: 111          
  })
})
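
Each split is a standard Hugging Face Dataset, so individual records can be indexed and splits converted to pandas as usual; for example (column names ID, SMILES, and Y as listed above):

>>> HLM["train"][0]                      # one record as a dict with keys 'ID', 'SMILES', 'Y'
>>> train_df = HLM["train"].to_pandas()  # convert the train split to a pandas DataFrame
>>> train_df["Y"].value_counts()         # class balance of the training labels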

Use a dataset to train a model

One way to use the dataset is through the MolFlux package developed by Exscientia. First, from the command line, install the MolFlux library with CatBoost and RDKit support

pip install 'molflux[catboost,rdkit]'

then load, featurize, fit, and evaluate a CatBoost model (the dataset already comes pre-split)

from datasets import load_dataset
from molflux.datasets import featurise_dataset
from molflux.features import load_from_dicts as load_representations_from_dicts
from molflux.modelzoo import load_from_dict as load_model_from_dict
from molflux.metrics import load_suite

# load the pre-split HLM config (train / test / external)
split_dataset = load_dataset('maomlab/HLM_RLM', name = 'HLM')

# featurise the SMILES column with Morgan and MACCS fingerprints
split_featurised_dataset = featurise_dataset(
    split_dataset,
    column = "SMILES",
    representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}]))

# configure a CatBoost classifier on the fingerprint features
model = load_model_from_dict({
    "name": "cat_boost_classifier",
    "config": {
        "x_features": ['SMILES::morgan', 'SMILES::maccs_rdkit'],
        "y_features": ['Y'],
    }})

# fit on the train split and predict on the test split
model.train(split_featurised_dataset["train"])
preds = model.predict(split_featurised_dataset["test"])

# score the test-set predictions with the standard classification metric suite
classification_suite = load_suite("classification")

scores = classification_suite.compute(
    references=split_featurised_dataset["test"]['Y'],
    predictions=preds["cat_boost_classifier::Y"])
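
The external split can be evaluated in the same way, reusing the model and metric suite defined above, to get a sense of how the model generalizes beyond the test set; a minimal sketch:

# predict and score the external split with the same model and metric suite
external_preds = model.predict(split_featurised_dataset["external"])

external_scores = classification_suite.compute(
    references=split_featurised_dataset["external"]['Y'],
    predictions=external_preds["cat_boost_classifier::Y"])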

Citation

Li, L.; Lu, Z.; Liu, G.; Tang, Y.; Li, W. In Silico Prediction of Human and Rat Liver Microsomal Stability via Machine Learning Methods. Chem. Res. Toxicol. 2022, 35 (9), 1614–1624. Publication Date: September 2, 2022. https://doi.org/10.1021/acs.chemrestox.2c00207