{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9AGV4FM3znB2"
      },
      "source": [
        "# Introduction to GROVER\n",
        "\n",
        "In this tutorial, we will go over what GROVER is and how to get it up and running.\n",
        "\n",
        "GROVER, short for Graph Representation frOm self-superVised mEssage passing tRansformer, is a framework proposed by Tencent AI Lab. GROVER uses self-supervised tasks at the node, edge, and graph levels to learn rich structural and semantic information about molecules from large unlabelled molecular datasets. It integrates message passing networks into a Transformer-style architecture to deliver a more expressive molecular encoder.\n",
        "\n",
        "Reference Paper: [Rong, Yu, et al. \"Grover: Self-supervised message passing transformer on large-scale molecular data.\" Advances in Neural Information Processing Systems (2020).](https://drug.ai.tencent.com/publications/GROVER.pdf)\n",
        "\n",
        "## Colab\n",
        "\n",
        "This tutorial and the rest in this sequence are designed to be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.\n",
        "\n",
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/Introduction_to_GROVER.ipynb)\n",
        "\n",
        "## Setup\n",
        "\n",
        "To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer; in that case, skip the Google Drive-specific paths below and clone the repository into a directory of your choice."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gySfXfHP15A2"
      },
      "source": [
        "## Import and set up the required modules\n",
        "We will first clone the forked GROVER repository, then install it as an editable library. We will also install deepchem and descriptastorus.\n",
        "\n",
        "NOTE: The [original GROVER repository](https://github.com/tencent-ailab/grover) does not contain a `setup.py` file, thus we are currently using a fork which does."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Hc9dofL-kbyf",
        "outputId": "7c8001e2-a269-400c-9c18-0763754c37c5"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/content/drive/MyDrive\n",
            "fatal: destination path 'grover' already exists and is not an empty directory.\n"
          ]
        }
      ],
      "source": [
        "# Clone the forked repository. This assumes Google Drive is mounted at /content/drive.\n",
        "%cd drive/MyDrive\n",
        "!git clone https://github.com/atreyamaj/grover.git"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "NsAZo_sz5nRv",
        "outputId": "75038385-b3cd-42b8-943f-927b2557fb5c"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/content/drive/MyDrive/grover\n"
          ]
        }
      ],
      "source": [
        "# Navigate to the working folder.\n",
        "%cd grover"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "2h08GD5foTRW",
        "outputId": "4138c9f6-e355-461b-fbda-aabb212fa6ec"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Obtaining file:///content/drive/MyDrive/grover\n",
            "Installing collected packages: grover\n",
            "  Running setup.py develop for grover\n",
            "Successfully installed grover-1.0.0\n"
          ]
        }
      ],
      "source": [
        "# Install the forked repository.\n",
        "!pip install -e ./"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dai45voQm4yp",
        "outputId": "d931202c-9746-4943-a894-5e10977f21cc"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Collecting deepchem\n",
            "  Downloading deepchem-2.6.1-py3-none-any.whl (608 kB)\n",
            "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from deepchem) (1.0.2)\n",
            "Requirement already satisfied: numpy>=1.21 in /usr/local/lib/python3.7/dist-packages (from deepchem) (1.21.6)\n",
            "Collecting rdkit-pypi\n",
            "  Downloading rdkit_pypi-2022.3.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.5 MB)\n",
            "Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from deepchem) (1.4.1)\n",
            "Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from deepchem) (1.3.5)\n",
            "Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from deepchem) (1.1.0)\n",
            "Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas->deepchem) (2022.1)\n",
            "Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->deepchem) (2.8.2)\n",
            "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas->deepchem) (1.15.0)\n",
            "Requirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from rdkit-pypi->deepchem) (7.1.2)\n",
            "Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepchem) (3.1.0)\n",
            "Installing collected packages: rdkit-pypi, deepchem\n",
            "Successfully installed deepchem-2.6.1 rdkit-pypi-2022.3.1\n",
            "Collecting git+https://github.com/bp-kelley/descriptastorus\n",
            "  Cloning https://github.com/bp-kelley/descriptastorus to /tmp/pip-req-build-_462lldf\n",
            "  Running command git clone -q https://github.com/bp-kelley/descriptastorus /tmp/pip-req-build-_462lldf\n",
            "Collecting pandas_flavor\n",
            "  Downloading pandas_flavor-0.3.0-py3-none-any.whl (6.3 kB)\n",
            "Requirement already satisfied: xarray in /usr/local/lib/python3.7/dist-packages (from pandas_flavor->descriptastorus==2.3.0.6) (0.18.2)\n",
            "  Downloading pandas_flavor-0.2.0-py2.py3-none-any.whl (6.6 kB)\n",
            "Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from pandas_flavor->descriptastorus==2.3.0.6) (1.3.5)\n",
            "Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->pandas_flavor->descriptastorus==2.3.0.6) (2.8.2)\n",
            "Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas->pandas_flavor->descriptastorus==2.3.0.6) (2022.1)\n",
            "Requirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.7/dist-packages (from pandas->pandas_flavor->descriptastorus==2.3.0.6) (1.21.6)\n",
            "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas->pandas_flavor->descriptastorus==2.3.0.6) (1.15.0)\n",
            "Requirement already satisfied: setuptools>=40.4 in /usr/local/lib/python3.7/dist-packages (from xarray->pandas_flavor->descriptastorus==2.3.0.6) (57.4.0)\n",
            "Building wheels for collected packages: descriptastorus\n",
            "  Building wheel for descriptastorus (setup.py) ... done\n",
            "  Created wheel for descriptastorus: filename=descriptastorus-2.3.0.6-py3-none-any.whl size=60704 sha256=10872f9972ee502829c712449b7dbd8d54717461dce2fdffe495f21e10044446\n",
            "  Stored in directory: /tmp/pip-ephem-wheel-cache-k9kvyu6l/wheels/f9/c3/4f/e7d01f4f2f1a89aef8f0ef088beb4a94976324f3ee21410b10\n",
            "Successfully built descriptastorus\n",
            "Installing collected packages: pandas-flavor, descriptastorus\n",
            "Successfully installed descriptastorus-2.3.0.6 pandas-flavor-0.2.0\n"
          ]
        }
      ],
      "source": [
        "# Install deepchem and descriptastorus.\n",
        "!pip install deepchem\n",
        "!pip install git+https://github.com/bp-kelley/descriptastorus"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wRtkZuD23IVu"
      },
      "source": [
        "## Extracting semantic motif labels\n",
        "The semantic motif labels are extracted by `scripts/save_features.py` with the `fgtasklabel` feature generator."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "altfeS6Dlfa-",
        "outputId": "558d2ea2-abd9-4b50-b393-372691735c24"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "WARNING:root:No normalization for BCUT2D_MWHI\n",
            "WARNING:root:No normalization for BCUT2D_MWLOW\n",
            "WARNING:root:No normalization for BCUT2D_CHGHI\n",
            "WARNING:root:No normalization for BCUT2D_CHGLO\n",
            "WARNING:root:No normalization for BCUT2D_LOGPHI\n",
            "WARNING:root:No normalization for BCUT2D_LOGPLOW\n",
            "WARNING:root:No normalization for BCUT2D_MRHI\n",
            "WARNING:root:No normalization for BCUT2D_MRLOW\n",
            "100% 5970/5970 [00:09<00:00, 620.91it/s]\n"
          ]
        }
      ],
      "source": [
        "!python scripts/save_features.py --data_path exampledata/pretrain/tryout.csv  \\\n",
        "                                --save_path exampledata/pretrain/tryout.npz   \\\n",
        "                                --features_generator fgtasklabel \\\n",
        "                                --restart"
      ]
    },
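    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an optional sanity check (not part of the original workflow), we can load the saved feature file and inspect its shape. This assumes the paths used above and that `save_features.py` stores its array under the `features` key, as chemprop-style feature scripts typically do."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional sanity check: inspect the saved semantic motif labels.\n",
        "# Assumes the archive uses the 'features' key, as in chemprop-style save scripts.\n",
        "import numpy as np\n",
        "\n",
        "labels = np.load('exampledata/pretrain/tryout.npz')['features']\n",
        "print(labels.shape)  # one row per molecule, one column per functional group task"
      ]
    },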
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oPiSHz7m3UME"
      },
      "source": [
        "## Extracting atom/bond contextual properties (vocabulary)\n",
        "The atom and bond contextual properties (the vocabulary) are extracted by `scripts/build_vocab.py`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "3HJGcjwSlqey",
        "outputId": "834c285f-8cb6-4dac-ad7c-b77662cbbc36"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "WARNING:root:No normalization for BCUT2D_MWHI\n",
            "WARNING:root:No normalization for BCUT2D_MWLOW\n",
            "WARNING:root:No normalization for BCUT2D_CHGHI\n",
            "WARNING:root:No normalization for BCUT2D_CHGLO\n",
            "WARNING:root:No normalization for BCUT2D_LOGPHI\n",
            "WARNING:root:No normalization for BCUT2D_LOGPLOW\n",
            "WARNING:root:No normalization for BCUT2D_MRHI\n",
            "WARNING:root:No normalization for BCUT2D_MRLOW\n",
            "Building atom vocab from file: exampledata/pretrain/tryout.csv\n",
            "50000it [00:04, 10946.14it/s]\n",
            "atom vocab size 324\n",
            "Building bond vocab from file: exampledata/pretrain/tryout.csv\n",
            "50000it [00:16, 3094.21it/s]\n",
            "bond vocab size 353\n"
          ]
        }
      ],
      "source": [
        "!python scripts/build_vocab.py --data_path exampledata/pretrain/tryout.csv  \\\n",
        "                             --vocab_save_folder exampledata/pretrain  \\\n",
        "                             --dataset_name tryout"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8s_crxb33hzD"
      },
      "source": [
        "## Splitting the data\n",
        "To accelerate data loading and reduce the memory cost in the multi-GPU pretraining scenario, the unlabelled molecular data needs to be split into several parts using `scripts/split_data.py`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ofBiGAV8nhFE",
        "outputId": "c015d375-2f0e-4890-d8b1-d136cb920b4a"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "WARNING:root:No normalization for BCUT2D_MWHI\n",
            "WARNING:root:No normalization for BCUT2D_MWLOW\n",
            "WARNING:root:No normalization for BCUT2D_CHGHI\n",
            "WARNING:root:No normalization for BCUT2D_CHGLO\n",
            "WARNING:root:No normalization for BCUT2D_LOGPHI\n",
            "WARNING:root:No normalization for BCUT2D_LOGPLOW\n",
            "WARNING:root:No normalization for BCUT2D_MRHI\n",
            "WARNING:root:No normalization for BCUT2D_MRLOW\n",
            "Number of files: 60\n"
          ]
        }
      ],
      "source": [
        "!python scripts/split_data.py --data_path exampledata/pretrain/tryout.csv  \\\n",
        "                             --features_path exampledata/pretrain/tryout.npz  \\\n",
        "                             --sample_per_file 100  \\\n",
        "                             --output_path exampledata/pretrain/tryout"
      ]
    },
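    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick check (an optional addition, assuming the output path used above), we can list a few of the shard files that `split_data.py` just wrote:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional: peek at the split output folder.\n",
        "import os\n",
        "\n",
        "print(sorted(os.listdir('exampledata/pretrain/tryout'))[:5])"
      ]
    },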
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "f7tPSzcd4Iu4"
      },
      "source": [
        "## Running Pretraining on a Single GPU"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "PBRyJjD4oijD",
        "outputId": "fb256173-08ca-48b9-81f8-9ff151595fcd"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "WARNING:root:No normalization for BCUT2D_MWHI\n",
            "WARNING:root:No normalization for BCUT2D_MWLOW\n",
            "WARNING:root:No normalization for BCUT2D_CHGHI\n",
            "WARNING:root:No normalization for BCUT2D_CHGLO\n",
            "WARNING:root:No normalization for BCUT2D_LOGPHI\n",
            "WARNING:root:No normalization for BCUT2D_LOGPLOW\n",
            "WARNING:root:No normalization for BCUT2D_MRHI\n",
            "WARNING:root:No normalization for BCUT2D_MRLOW\n",
            "[WARNING] Horovod cannot be imported; multi-GPU training is unsupported\n",
            "Namespace(activation='PReLU', atom_vocab_path='exampledata/pretrain/tryout_atom_vocab.pkl', backbone='gtrans', batch_size=32, bias=False, bond_drop_rate=0, bond_vocab_path='exampledata/pretrain/tryout_bond_vocab.pkl', cuda=True, data_path='exampledata/pretrain/tryout', dense=False, depth=5, dist_coff=0.1, dropout=0.1, embedding_output_type='both', enable_multi_gpu=False, epochs=3, fg_label_path=None, final_lr=0.0001, fine_tune_coff=1, hidden_size=100, init_lr=0.0002, max_lr=0.0004, no_cache=True, num_attn_head=1, num_mt_block=1, parser_name='pretrain', save_dir='model/tryout', save_interval=9999999999, undirected=False, warmup_epochs=2.0, weight_decay=1e-07)\n",
            "Loading data\n",
            "Loading data:\n",
            "Number of files: 60\n",
            "Number of samples: 5970\n",
            "Samples/file: 100\n",
            "Splitting data with seed 0.\n",
            "Total size = 5,970 | train size = 5,400 | val size = 570\n",
            "atom vocab size: 324, bond vocab size: 353, Number of FG tasks: 85\n",
            "Pre-loaded test data: 6\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 12 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 10 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "Restore checkpoint, current epoch: 2\n",
            "GROVEREmbedding(\n",
            "  (encoders): GTransEncoder(\n",
            "    (edge_blocks): ModuleList(\n",
            "      (0): MTBlock(\n",
            "        (heads): ModuleList(\n",
            "          (0): Head(\n",
            "            (mpn_q): MPNEncoder(\n",
            "              (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "              (act_func): PReLU(num_parameters=1)\n",
            "              (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "            )\n",
            "            (mpn_k): MPNEncoder(\n",
            "              (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "              (act_func): PReLU(num_parameters=1)\n",
            "              (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "            )\n",
            "            (mpn_v): MPNEncoder(\n",
            "              (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "              (act_func): PReLU(num_parameters=1)\n",
            "              (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "            )\n",
            "          )\n",
            "        )\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "        (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "        (layernorm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (W_i): Linear(in_features=165, out_features=100, bias=False)\n",
            "        (attn): MultiHeadedAttention(\n",
            "          (linear_layers): ModuleList(\n",
            "            (0): Linear(in_features=100, out_features=100, bias=True)\n",
            "            (1): Linear(in_features=100, out_features=100, bias=True)\n",
            "            (2): Linear(in_features=100, out_features=100, bias=True)\n",
            "          )\n",
            "          (output_linear): Linear(in_features=100, out_features=100, bias=False)\n",
            "          (attention): Attention()\n",
            "          (dropout): Dropout(p=0.1, inplace=False)\n",
            "        )\n",
            "        (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
            "        (sublayer): SublayerConnection(\n",
            "          (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "          (dropout): Dropout(p=0.1, inplace=False)\n",
            "        )\n",
            "      )\n",
            "    )\n",
            "    (node_blocks): ModuleList(\n",
            "      (0): MTBlock(\n",
            "        (heads): ModuleList(\n",
            "          (0): Head(\n",
            "            (mpn_q): MPNEncoder(\n",
            "              (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "              (act_func): PReLU(num_parameters=1)\n",
            "              (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "            )\n",
            "            (mpn_k): MPNEncoder(\n",
            "              (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "              (act_func): PReLU(num_parameters=1)\n",
            "              (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "            )\n",
            "            (mpn_v): MPNEncoder(\n",
            "              (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "              (act_func): PReLU(num_parameters=1)\n",
            "              (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "            )\n",
            "          )\n",
            "        )\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "        (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "        (layernorm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (W_i): Linear(in_features=151, out_features=100, bias=False)\n",
            "        (attn): MultiHeadedAttention(\n",
            "          (linear_layers): ModuleList(\n",
            "            (0): Linear(in_features=100, out_features=100, bias=True)\n",
            "            (1): Linear(in_features=100, out_features=100, bias=True)\n",
            "            (2): Linear(in_features=100, out_features=100, bias=True)\n",
            "          )\n",
            "          (output_linear): Linear(in_features=100, out_features=100, bias=False)\n",
            "          (attention): Attention()\n",
            "          (dropout): Dropout(p=0.1, inplace=False)\n",
            "        )\n",
            "        (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
            "        (sublayer): SublayerConnection(\n",
            "          (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "          (dropout): Dropout(p=0.1, inplace=False)\n",
            "        )\n",
            "      )\n",
            "    )\n",
            "    (ffn_atom_from_atom): PositionwiseFeedForward(\n",
            "      (W_1): Linear(in_features=251, out_features=400, bias=True)\n",
            "      (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "      (dropout): Dropout(p=0.1, inplace=False)\n",
            "      (act_func): PReLU(num_parameters=1)\n",
            "    )\n",
            "    (ffn_atom_from_bond): PositionwiseFeedForward(\n",
            "      (W_1): Linear(in_features=251, out_features=400, bias=True)\n",
            "      (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "      (dropout): Dropout(p=0.1, inplace=False)\n",
            "      (act_func): PReLU(num_parameters=1)\n",
            "    )\n",
            "    (ffn_bond_from_atom): PositionwiseFeedForward(\n",
            "      (W_1): Linear(in_features=265, out_features=400, bias=True)\n",
            "      (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "      (dropout): Dropout(p=0.1, inplace=False)\n",
            "      (act_func): PReLU(num_parameters=1)\n",
            "    )\n",
            "    (ffn_bond_from_bond): PositionwiseFeedForward(\n",
            "      (W_1): Linear(in_features=265, out_features=400, bias=True)\n",
            "      (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "      (dropout): Dropout(p=0.1, inplace=False)\n",
            "      (act_func): PReLU(num_parameters=1)\n",
            "    )\n",
            "    (atom_from_atom_sublayer): SublayerConnection(\n",
            "      (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "      (dropout): Dropout(p=0.1, inplace=False)\n",
            "    )\n",
            "    (atom_from_bond_sublayer): SublayerConnection(\n",
            "      (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "      (dropout): Dropout(p=0.1, inplace=False)\n",
            "    )\n",
            "    (bond_from_atom_sublayer): SublayerConnection(\n",
            "      (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "      (dropout): Dropout(p=0.1, inplace=False)\n",
            "    )\n",
            "    (bond_from_bond_sublayer): SublayerConnection(\n",
            "      (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "      (dropout): Dropout(p=0.1, inplace=False)\n",
            "    )\n",
            "    (act_func_node): PReLU(num_parameters=1)\n",
            "    (act_func_edge): PReLU(num_parameters=1)\n",
            "    (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "  )\n",
            ")\n",
            "Total parameters: 768614\n",
            "EP:3 Model Saved on: model/tryout/model.ep3\n",
            "Total Time: 14.828\n"
          ]
        }
      ],
      "source": [
        "!python main.py pretrain \\\n",
        "               --data_path exampledata/pretrain/tryout \\\n",
        "               --save_dir model/tryout \\\n",
        "               --atom_vocab_path exampledata/pretrain/tryout_atom_vocab.pkl \\\n",
        "               --bond_vocab_path exampledata/pretrain/tryout_bond_vocab.pkl \\\n",
        "               --batch_size 32 \\\n",
        "               --dropout 0.1 \\\n",
        "               --depth 5 \\\n",
        "               --num_attn_head 1 \\\n",
        "               --hidden_size 100 \\\n",
        "               --epochs 3 \\\n",
        "               --init_lr 0.0002 \\\n",
        "               --max_lr 0.0004 \\\n",
        "               --final_lr 0.0001 \\\n",
        "               --weight_decay 0.0000001 \\\n",
        "               --activation PReLU \\\n",
        "               --backbone gtrans \\\n",
        "               --embedding_output_type both"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "t485Nwxt4QXL"
      },
      "source": [
        "# Training and Finetuning\n",
        "\n",
        "## Extracting Molecular Features\n",
        "\n",
        "Given a labelled molecular dataset, we can extract additional molecular features in order to train and finetune the model from the existing pretrained checkpoint. The feature matrix is stored as a `.npz` file."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Y4KUfRvFomCG",
        "outputId": "5d55f668-4675-47e5-af9c-94d3a5c04a63"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "WARNING:root:No normalization for BCUT2D_MWHI\n",
            "WARNING:root:No normalization for BCUT2D_MWLOW\n",
            "WARNING:root:No normalization for BCUT2D_CHGHI\n",
            "WARNING:root:No normalization for BCUT2D_CHGLO\n",
            "WARNING:root:No normalization for BCUT2D_LOGPHI\n",
            "WARNING:root:No normalization for BCUT2D_LOGPLOW\n",
            "WARNING:root:No normalization for BCUT2D_MRHI\n",
            "WARNING:root:No normalization for BCUT2D_MRLOW\n",
            "[21:04:21] WARNING: not removing hydrogen atom without neighbors\n",
            "100% 2039/2039 [01:19<00:00, 25.67it/s]\n"
          ]
        }
      ],
      "source": [
        "!python scripts/save_features.py --data_path exampledata/finetune/bbbp.csv \\\n",
        "                                --save_path exampledata/finetune/bbbp.npz \\\n",
        "                                --features_generator rdkit_2d_normalized \\\n",
        "                                --restart "
      ]
    },
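    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The saved `.npz` archive holds one descriptor vector per molecule. A quick sanity check is to load it back with NumPy. Note this is a sketch under assumptions: the key name `features` follows common `np.savez_compressed` usage and is not confirmed from `save_features.py` (inspect `arr.files` if your archive uses a different key), and we build a toy archive rather than reading `exampledata/finetune/bbbp.npz` so the snippet is self-contained:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Build a toy archive standing in for the file written by save_features.py.\n",
        "# (The key name 'features' is an assumption, not confirmed from the script.)\n",
        "toy = np.random.rand(4, 200)  # 4 molecules, 200 descriptors each\n",
        "np.savez_compressed(\"toy_features.npz\", features=toy)\n",
        "\n",
        "arr = np.load(\"toy_features.npz\")\n",
        "features = arr[\"features\"]\n",
        "print(arr.files, features.shape)  # one row of descriptors per molecule\n",
        "```"
      ]
    },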
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "l8DzWh_o4bvl"
      },
      "source": [
        "## Finetuning with Existing Data\n",
        "Given the labelled dataset and the extracted molecular features, we can use the `finetune` task to finetune the pretrained model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "gYrbgvCmpNrX",
        "outputId": "c18c267d-bc5f-4dbf-e9e2-024dac070fa4"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "WARNING:root:No normalization for BCUT2D_MWHI\n",
            "WARNING:root:No normalization for BCUT2D_MWLOW\n",
            "WARNING:root:No normalization for BCUT2D_CHGHI\n",
            "WARNING:root:No normalization for BCUT2D_CHGLO\n",
            "WARNING:root:No normalization for BCUT2D_LOGPHI\n",
            "WARNING:root:No normalization for BCUT2D_LOGPLOW\n",
            "WARNING:root:No normalization for BCUT2D_MRHI\n",
            "WARNING:root:No normalization for BCUT2D_MRLOW\n",
            "[WARNING] Horovod cannot be imported; multi-GPU training is unsupported\n",
            "Fold 0\n",
            "Loading data\n",
            "Number of tasks = 1\n",
            "Splitting data with seed 0\n",
            "100% 2039/2039 [00:00<00:00, 3681.51it/s]\n",
            "Total scaffolds = 1,025 | train scaffolds = 764 | val scaffolds = 123 | test scaffolds = 138\n",
            "Label averages per scaffold, in decreasing order of scaffold frequency,capped at 10 scaffolds and 20 labels: [(array([0.72992701]), array([137])), (array([1.]), array([1])), (array([0.]), array([1])), (array([1.]), array([1])), (array([1.]), array([1])), (array([0.]), array([1])), (array([1.]), array([1])), (array([1.]), array([2])), (array([0.]), array([2])), (array([1.]), array([1]))]\n",
            "Class sizes\n",
            "p_np 0: 23.49%, 1: 76.51%\n",
            "Total size = 2,039 | train size = 1,631 | val size = 203 | test size = 205\n",
            "Loading model 0 from model/tryout/model.ep3\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_node.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_edge.weight\".\n",
            "Pretrained parameter \"av_task_atom.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_atom.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_bond.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_bond.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear_rev.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear_rev.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear_rev.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear_rev.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.readout.cached_zero_vector\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_atom.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_atom.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_bond.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_bond.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_atom.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_atom.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_bond.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_bond.bias\" cannot be found in model parameters.\n",
            "GroverFinetuneTask(\n",
            "  (grover): GROVEREmbedding(\n",
            "    (encoders): GTransEncoder(\n",
            "      (edge_blocks): ModuleList(\n",
            "        (0): MTBlock(\n",
            "          (heads): ModuleList(\n",
            "            (0): Head(\n",
            "              (mpn_q): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_k): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_v): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "            )\n",
            "          )\n",
            "          (act_func): PReLU(num_parameters=1)\n",
            "          (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "          (layernorm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "          (W_i): Linear(in_features=165, out_features=100, bias=False)\n",
            "          (attn): MultiHeadedAttention(\n",
            "            (linear_layers): ModuleList(\n",
            "              (0): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (1): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (2): Linear(in_features=100, out_features=100, bias=True)\n",
            "            )\n",
            "            (output_linear): Linear(in_features=100, out_features=100, bias=False)\n",
            "            (attention): Attention()\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "          (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
            "          (sublayer): SublayerConnection(\n",
            "            (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "        )\n",
            "      )\n",
            "      (node_blocks): ModuleList(\n",
            "        (0): MTBlock(\n",
            "          (heads): ModuleList(\n",
            "            (0): Head(\n",
            "              (mpn_q): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_k): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_v): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "            )\n",
            "          )\n",
            "          (act_func): PReLU(num_parameters=1)\n",
            "          (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "          (layernorm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "          (W_i): Linear(in_features=151, out_features=100, bias=False)\n",
            "          (attn): MultiHeadedAttention(\n",
            "            (linear_layers): ModuleList(\n",
            "              (0): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (1): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (2): Linear(in_features=100, out_features=100, bias=True)\n",
            "            )\n",
            "            (output_linear): Linear(in_features=100, out_features=100, bias=False)\n",
            "            (attention): Attention()\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "          (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
            "          (sublayer): SublayerConnection(\n",
            "            (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "        )\n",
            "      )\n",
            "      (ffn_atom_from_atom): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=251, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_atom_from_bond): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=251, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_bond_from_atom): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=265, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_bond_from_bond): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=265, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (atom_from_atom_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (atom_from_bond_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (bond_from_atom_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (bond_from_bond_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (act_func_node): PReLU(num_parameters=1)\n",
            "      (act_func_edge): PReLU(num_parameters=1)\n",
            "      (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "    )\n",
            "  )\n",
            "  (readout): Readout()\n",
            "  (mol_atom_from_atom_ffn): Sequential(\n",
            "    (0): Dropout(p=0.1, inplace=False)\n",
            "    (1): Linear(in_features=300, out_features=200, bias=True)\n",
            "    (2): PReLU(num_parameters=1)\n",
            "    (3): Dropout(p=0.1, inplace=False)\n",
            "    (4): Linear(in_features=200, out_features=1, bias=True)\n",
            "  )\n",
            "  (mol_atom_from_bond_ffn): Sequential(\n",
            "    (0): Dropout(p=0.1, inplace=False)\n",
            "    (1): Linear(in_features=300, out_features=200, bias=True)\n",
            "    (2): PReLU(num_parameters=1)\n",
            "    (3): Dropout(p=0.1, inplace=False)\n",
            "    (4): Linear(in_features=200, out_features=1, bias=True)\n",
            "  )\n",
            "  (sigmoid): Sigmoid()\n",
            ")\n",
            "Number of parameters = 889,418\n",
            "Moving model to cuda\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 10 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "Epoch: 0000 loss_train: 1.027524 loss_val: 0.494863 auc_val: 0.8744 cur_lr: 0.00059 t_time: 5.5550s v_time: 0.7379s\n",
            "Epoch: 0001 loss_train: 0.855072 loss_val: 0.488093 auc_val: 0.8805 cur_lr: 0.00098 t_time: 5.4703s v_time: 0.7435s\n",
            "Epoch: 0002 loss_train: 0.802001 loss_val: 0.488020 auc_val: 0.8953 cur_lr: 0.00073 t_time: 5.5585s v_time: 0.7317s\n",
            "Epoch: 0003 loss_train: 0.743282 loss_val: 0.483438 auc_val: 0.8804 cur_lr: 0.00055 t_time: 5.5933s v_time: 0.7305s\n",
            "Epoch: 0004 loss_train: 0.705130 loss_val: 0.473394 auc_val: 0.9043 cur_lr: 0.00041 t_time: 5.5970s v_time: 0.7558s\n",
            "Epoch: 0005 loss_train: 0.682583 loss_val: 0.473367 auc_val: 0.8962 cur_lr: 0.00030 t_time: 5.5740s v_time: 0.7244s\n",
            "Epoch: 0006 loss_train: 0.659755 loss_val: 0.477886 auc_val: 0.8939 cur_lr: 0.00023 t_time: 5.5852s v_time: 0.7288s\n",
            "Epoch: 0007 loss_train: 0.658016 loss_val: 0.476979 auc_val: 0.8923 cur_lr: 0.00017 t_time: 5.5050s v_time: 0.7280s\n",
            "Epoch: 0008 loss_train: 0.647427 loss_val: 0.470443 auc_val: 0.9020 cur_lr: 0.00013 t_time: 5.5287s v_time: 0.7295s\n",
            "Epoch: 0009 loss_train: 0.646125 loss_val: 0.474078 auc_val: 0.8938 cur_lr: 0.00010 t_time: 5.7616s v_time: 0.7285s\n",
            "Model 0 best validation auc = 0.904320 on epoch 4\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_node.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_edge.weight\".\n",
            "Loading pretrained parameter \"readout.cached_zero_vector\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.bias\".\n",
            "Moving model to cuda\n",
            "Model 0 test auc = 0.921247\n",
            "Ensemble test auc = 0.921247\n",
            "Fold 1\n",
            "Loading data\n",
            "Number of tasks = 1\n",
            "Splitting data with seed 1\n",
            "100% 2039/2039 [00:00<00:00, 3551.50it/s]\n",
            "Total scaffolds = 1,025 | train scaffolds = 768 | val scaffolds = 132 | test scaffolds = 125\n",
            "Label averages per scaffold, in decreasing order of scaffold frequency,capped at 10 scaffolds and 20 labels: [(array([0.72992701]), array([137])), (array([1.]), array([2])), (array([1.]), array([3])), (array([0.8]), array([5])), (array([1.]), array([9])), (array([1.]), array([1])), (array([1.]), array([1])), (array([1.]), array([1])), (array([1.]), array([1])), (array([1.]), array([1]))]\n",
            "Class sizes\n",
            "p_np 0: 23.49%, 1: 76.51%\n",
            "Total size = 2,039 | train size = 1,631 | val size = 203 | test size = 205\n",
            "Loading model 0 from model/tryout/model.ep3\n",
            "Pretrained parameter \"av_task_atom.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_atom.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_bond.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_bond.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear_rev.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear_rev.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear_rev.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear_rev.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.readout.cached_zero_vector\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_atom.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_atom.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_bond.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_bond.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_atom.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_atom.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_bond.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_bond.bias\" cannot be found in model parameters.\n",
            "GroverFinetuneTask(\n",
            "  (grover): GROVEREmbedding(\n",
            "    (encoders): GTransEncoder(\n",
            "      (edge_blocks): ModuleList(\n",
            "        (0): MTBlock(\n",
            "          (heads): ModuleList(\n",
            "            (0): Head(\n",
            "              (mpn_q): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_k): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_v): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "            )\n",
            "          )\n",
            "          (act_func): PReLU(num_parameters=1)\n",
            "          (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "          (layernorm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "          (W_i): Linear(in_features=165, out_features=100, bias=False)\n",
            "          (attn): MultiHeadedAttention(\n",
            "            (linear_layers): ModuleList(\n",
            "              (0): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (1): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (2): Linear(in_features=100, out_features=100, bias=True)\n",
            "            )\n",
            "            (output_linear): Linear(in_features=100, out_features=100, bias=False)\n",
            "            (attention): Attention()\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "          (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
            "          (sublayer): SublayerConnection(\n",
            "            (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "        )\n",
            "      )\n",
            "      (node_blocks): ModuleList(\n",
            "        (0): MTBlock(\n",
            "          (heads): ModuleList(\n",
            "            (0): Head(\n",
            "              (mpn_q): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_k): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_v): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "            )\n",
            "          )\n",
            "          (act_func): PReLU(num_parameters=1)\n",
            "          (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "          (layernorm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "          (W_i): Linear(in_features=151, out_features=100, bias=False)\n",
            "          (attn): MultiHeadedAttention(\n",
            "            (linear_layers): ModuleList(\n",
            "              (0): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (1): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (2): Linear(in_features=100, out_features=100, bias=True)\n",
            "            )\n",
            "            (output_linear): Linear(in_features=100, out_features=100, bias=False)\n",
            "            (attention): Attention()\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "          (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
            "          (sublayer): SublayerConnection(\n",
            "            (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "        )\n",
            "      )\n",
            "      (ffn_atom_from_atom): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=251, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_atom_from_bond): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=251, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_bond_from_atom): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=265, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_bond_from_bond): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=265, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (atom_from_atom_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (atom_from_bond_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (bond_from_atom_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (bond_from_bond_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (act_func_node): PReLU(num_parameters=1)\n",
            "      (act_func_edge): PReLU(num_parameters=1)\n",
            "      (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "    )\n",
            "  )\n",
            "  (readout): Readout()\n",
            "  (mol_atom_from_atom_ffn): Sequential(\n",
            "    (0): Dropout(p=0.1, inplace=False)\n",
            "    (1): Linear(in_features=300, out_features=200, bias=True)\n",
            "    (2): PReLU(num_parameters=1)\n",
            "    (3): Dropout(p=0.1, inplace=False)\n",
            "    (4): Linear(in_features=200, out_features=1, bias=True)\n",
            "  )\n",
            "  (mol_atom_from_bond_ffn): Sequential(\n",
            "    (0): Dropout(p=0.1, inplace=False)\n",
            "    (1): Linear(in_features=300, out_features=200, bias=True)\n",
            "    (2): PReLU(num_parameters=1)\n",
            "    (3): Dropout(p=0.1, inplace=False)\n",
            "    (4): Linear(in_features=200, out_features=1, bias=True)\n",
            "  )\n",
            "  (sigmoid): Sigmoid()\n",
            ")\n",
            "Number of parameters = 889,418\n",
            "Moving model to cuda\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 10 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "Epoch: 0000 loss_train: 1.016377 loss_val: 0.492704 auc_val: 0.8791 cur_lr: 0.00059 t_time: 6.3182s v_time: 0.7794s\n",
            "Epoch: 0001 loss_train: 0.822924 loss_val: 0.487600 auc_val: 0.8680 cur_lr: 0.00098 t_time: 5.5121s v_time: 0.7989s\n",
            "Epoch: 0002 loss_train: 0.752341 loss_val: 0.470391 auc_val: 0.8893 cur_lr: 0.00073 t_time: 5.5443s v_time: 0.7647s\n",
            "Epoch: 0003 loss_train: 0.709847 loss_val: 0.468552 auc_val: 0.8863 cur_lr: 0.00055 t_time: 5.6165s v_time: 0.8104s\n",
            "Epoch: 0004 loss_train: 0.682037 loss_val: 0.463301 auc_val: 0.8895 cur_lr: 0.00041 t_time: 5.5689s v_time: 0.7795s\n",
            "Epoch: 0005 loss_train: 0.659133 loss_val: 0.464382 auc_val: 0.8914 cur_lr: 0.00030 t_time: 5.5949s v_time: 0.8020s\n",
            "Epoch: 0006 loss_train: 0.630823 loss_val: 0.463676 auc_val: 0.8871 cur_lr: 0.00023 t_time: 5.5311s v_time: 0.7548s\n",
            "Epoch: 0007 loss_train: 0.613836 loss_val: 0.460376 auc_val: 0.8912 cur_lr: 0.00017 t_time: 5.5768s v_time: 0.7511s\n",
            "Epoch: 0008 loss_train: 0.604636 loss_val: 0.464385 auc_val: 0.8900 cur_lr: 0.00013 t_time: 5.5764s v_time: 0.7848s\n",
            "Epoch: 0009 loss_train: 0.600993 loss_val: 0.461464 auc_val: 0.8902 cur_lr: 0.00010 t_time: 5.6025s v_time: 0.7736s\n",
            "Model 0 best validation auc = 0.891352 on epoch 5\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_node.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_edge.weight\".\n",
            "Loading pretrained parameter \"readout.cached_zero_vector\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.bias\".\n",
            "Moving model to cuda\n",
            "Model 0 test auc = 0.920000\n",
            "Ensemble test auc = 0.920000\n",
            "Fold 2\n",
            "Loading data\n",
            "Number of tasks = 1\n",
            "Splitting data with seed 2\n",
            "100% 2039/2039 [00:00<00:00, 3569.05it/s]\n",
            "Total scaffolds = 1,025 | train scaffolds = 766 | val scaffolds = 125 | test scaffolds = 134\n",
            "Label averages per scaffold, in decreasing order of scaffold frequency, capped at 10 scaffolds and 20 labels: [(array([0.72992701]), array([137])), (array([1.]), array([1])), (array([1.]), array([1])), (array([1.]), array([1])), (array([0.]), array([1])), (array([1.]), array([1])), (array([1.]), array([1])), (array([1.]), array([1])), (array([0.]), array([5])), (array([1.]), array([1]))]\n",
            "Class sizes\n",
            "p_np 0: 23.49%, 1: 76.51%\n",
            "Total size = 2,039 | train size = 1,631 | val size = 203 | test size = 205\n",
            "Loading model 0 from model/tryout/model.ep3\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_node.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_edge.weight\".\n",
            "Pretrained parameter \"av_task_atom.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_atom.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_bond.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"av_task_bond.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear_rev.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_atom.linear_rev.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear_rev.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"bv_task_bond.linear_rev.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.readout.cached_zero_vector\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_atom.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_atom.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_bond.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_atom_from_bond.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_atom.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_atom.bias\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_bond.weight\" cannot be found in model parameters.\n",
            "Pretrained parameter \"fg_task_all.linear_bond_from_bond.bias\" cannot be found in model parameters.\n",
            "GroverFinetuneTask(\n",
            "  (grover): GROVEREmbedding(\n",
            "    (encoders): GTransEncoder(\n",
            "      (edge_blocks): ModuleList(\n",
            "        (0): MTBlock(\n",
            "          (heads): ModuleList(\n",
            "            (0): Head(\n",
            "              (mpn_q): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_k): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_v): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "            )\n",
            "          )\n",
            "          (act_func): PReLU(num_parameters=1)\n",
            "          (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "          (layernorm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "          (W_i): Linear(in_features=165, out_features=100, bias=False)\n",
            "          (attn): MultiHeadedAttention(\n",
            "            (linear_layers): ModuleList(\n",
            "              (0): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (1): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (2): Linear(in_features=100, out_features=100, bias=True)\n",
            "            )\n",
            "            (output_linear): Linear(in_features=100, out_features=100, bias=False)\n",
            "            (attention): Attention()\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "          (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
            "          (sublayer): SublayerConnection(\n",
            "            (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "        )\n",
            "      )\n",
            "      (node_blocks): ModuleList(\n",
            "        (0): MTBlock(\n",
            "          (heads): ModuleList(\n",
            "            (0): Head(\n",
            "              (mpn_q): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_k): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "              (mpn_v): MPNEncoder(\n",
            "                (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "                (act_func): PReLU(num_parameters=1)\n",
            "                (W_h): Linear(in_features=100, out_features=100, bias=False)\n",
            "              )\n",
            "            )\n",
            "          )\n",
            "          (act_func): PReLU(num_parameters=1)\n",
            "          (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "          (layernorm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "          (W_i): Linear(in_features=151, out_features=100, bias=False)\n",
            "          (attn): MultiHeadedAttention(\n",
            "            (linear_layers): ModuleList(\n",
            "              (0): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (1): Linear(in_features=100, out_features=100, bias=True)\n",
            "              (2): Linear(in_features=100, out_features=100, bias=True)\n",
            "            )\n",
            "            (output_linear): Linear(in_features=100, out_features=100, bias=False)\n",
            "            (attention): Attention()\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "          (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
            "          (sublayer): SublayerConnection(\n",
            "            (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "            (dropout): Dropout(p=0.1, inplace=False)\n",
            "          )\n",
            "        )\n",
            "      )\n",
            "      (ffn_atom_from_atom): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=251, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_atom_from_bond): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=251, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_bond_from_atom): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=265, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (ffn_bond_from_bond): PositionwiseFeedForward(\n",
            "        (W_1): Linear(in_features=265, out_features=400, bias=True)\n",
            "        (W_2): Linear(in_features=400, out_features=100, bias=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "        (act_func): PReLU(num_parameters=1)\n",
            "      )\n",
            "      (atom_from_atom_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (atom_from_bond_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (bond_from_atom_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (bond_from_bond_sublayer): SublayerConnection(\n",
            "        (norm): LayerNorm((100,), eps=1e-05, elementwise_affine=True)\n",
            "        (dropout): Dropout(p=0.1, inplace=False)\n",
            "      )\n",
            "      (act_func_node): PReLU(num_parameters=1)\n",
            "      (act_func_edge): PReLU(num_parameters=1)\n",
            "      (dropout_layer): Dropout(p=0.1, inplace=False)\n",
            "    )\n",
            "  )\n",
            "  (readout): Readout()\n",
            "  (mol_atom_from_atom_ffn): Sequential(\n",
            "    (0): Dropout(p=0.1, inplace=False)\n",
            "    (1): Linear(in_features=300, out_features=200, bias=True)\n",
            "    (2): PReLU(num_parameters=1)\n",
            "    (3): Dropout(p=0.1, inplace=False)\n",
            "    (4): Linear(in_features=200, out_features=1, bias=True)\n",
            "  )\n",
            "  (mol_atom_from_bond_ffn): Sequential(\n",
            "    (0): Dropout(p=0.1, inplace=False)\n",
            "    (1): Linear(in_features=300, out_features=200, bias=True)\n",
            "    (2): PReLU(num_parameters=1)\n",
            "    (3): Dropout(p=0.1, inplace=False)\n",
            "    (4): Linear(in_features=200, out_features=1, bias=True)\n",
            "  )\n",
            "  (sigmoid): Sigmoid()\n",
            ")\n",
            "Number of parameters = 889,418\n",
            "Moving model to cuda\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 10 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "Epoch: 0000 loss_train: 1.000364 loss_val: 0.507716 auc_val: 0.8434 cur_lr: 0.00059 t_time: 5.7976s v_time: 0.7802s\n",
            "Epoch: 0001 loss_train: 0.824395 loss_val: 0.504539 auc_val: 0.8560 cur_lr: 0.00098 t_time: 5.5894s v_time: 0.7779s\n",
            "Epoch: 0002 loss_train: 0.735137 loss_val: 0.493423 auc_val: 0.8539 cur_lr: 0.00073 t_time: 5.5191s v_time: 0.7610s\n",
            "Epoch: 0003 loss_train: 0.687535 loss_val: 0.487282 auc_val: 0.8597 cur_lr: 0.00055 t_time: 5.5613s v_time: 0.7595s\n",
            "Epoch: 0004 loss_train: 0.681197 loss_val: 0.489330 auc_val: 0.8702 cur_lr: 0.00041 t_time: 5.5513s v_time: 0.7501s\n",
            "Epoch: 0005 loss_train: 0.647608 loss_val: 0.488870 auc_val: 0.8618 cur_lr: 0.00030 t_time: 5.6565s v_time: 0.7739s\n",
            "Epoch: 0006 loss_train: 0.638494 loss_val: 0.488281 auc_val: 0.8729 cur_lr: 0.00023 t_time: 5.5400s v_time: 0.7584s\n",
            "Epoch: 0007 loss_train: 0.626862 loss_val: 0.490144 auc_val: 0.8702 cur_lr: 0.00017 t_time: 5.6183s v_time: 0.7814s\n",
            "Epoch: 0008 loss_train: 0.619776 loss_val: 0.484179 auc_val: 0.8782 cur_lr: 0.00013 t_time: 5.9662s v_time: 0.7596s\n",
            "Epoch: 0009 loss_train: 0.613262 loss_val: 0.486484 auc_val: 0.8789 cur_lr: 0.00010 t_time: 6.3030s v_time: 0.7931s\n",
            "Model 0 best validation auc = 0.878887 on epoch 9\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_node.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_edge.weight\".\n",
            "Loading pretrained parameter \"readout.cached_zero_vector\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.bias\".\n",
            "Moving model to cuda\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            "Model 0 test auc = 0.888635\n",
            "Ensemble test auc = 0.888635\n",
            "3-fold cross validation\n",
            "Seed 0 ==> test auc = 0.921247\n",
            "Seed 1 ==> test auc = 0.920000\n",
            "Seed 2 ==> test auc = 0.888635\n",
            "overall_scaffold_balanced_test_auc=0.909961\n",
            "std=0.015088\n"
          ]
        }
      ],
      "source": [
        "!python main.py finetune --data_path exampledata/finetune/bbbp.csv \\\n",
        "                        --features_path exampledata/finetune/bbbp.npz \\\n",
        "                        --save_dir model/finetune/bbbp/ \\\n",
        "                        --checkpoint_path model/tryout/model.ep3 \\\n",
        "                        --dataset_type classification \\\n",
        "                        --split_type scaffold_balanced \\\n",
        "                        --ensemble_size 1 \\\n",
        "                        --num_folds 3 \\\n",
        "                        --no_features_scaling \\\n",
        "                        --ffn_hidden_size 200 \\\n",
        "                        --batch_size 32 \\\n",
        "                        --epochs 10 \\\n",
        "                        --init_lr 0.00015"
      ]
    },
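    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `overall_scaffold_balanced_test_auc` and `std` reported above are simply the mean and population standard deviation of the per-seed test AUCs. A minimal NumPy sketch that reproduces them from the logged values:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Per-seed test AUCs copied from the 3-fold cross-validation log above\n",
        "fold_aucs = np.array([0.921247, 0.920000, 0.888635])\n",
        "\n",
        "print(round(fold_aucs.mean(), 6))  # 0.909961\n",
        "print(round(fold_aucs.std(), 6))   # 0.015088\n",
        "```"
      ]
    },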
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZgrgiGhH4pDJ"
      },
      "source": [
        "# Predicting output\n",
        "\n",
        "## Extracting molecular features\n",
        "\n",
        "If the finetuned model uses additional molecular features as input, we need to generate those features for the target molecules as well."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "q1SjXioJpy6E",
        "outputId": "d3814fa5-3d02-45f1-d90f-d650ee048510"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "WARNING:root:No normalization for BCUT2D_MWHI\n",
            "WARNING:root:No normalization for BCUT2D_MWLOW\n",
            "WARNING:root:No normalization for BCUT2D_CHGHI\n",
            "WARNING:root:No normalization for BCUT2D_CHGLO\n",
            "WARNING:root:No normalization for BCUT2D_LOGPHI\n",
            "WARNING:root:No normalization for BCUT2D_LOGPLOW\n",
            "WARNING:root:No normalization for BCUT2D_MRHI\n",
            "WARNING:root:No normalization for BCUT2D_MRLOW\n",
            "[21:09:19] WARNING: not removing hydrogen atom without neighbors\n",
            "  0% 6/2039 [00:01<06:32,  5.18it/s][21:09:21] WARNING: not removing hydrogen atom without neighbors\n",
            "  2% 47/2039 [00:02<01:03, 31.22it/s][21:09:22] WARNING: not removing hydrogen atom without neighbors\n",
            "  4% 91/2039 [00:04<01:14, 26.15it/s][21:09:24] WARNING: not removing hydrogen atom without neighbors\n",
            "  6% 120/2039 [00:05<01:03, 30.16it/s][21:09:25] WARNING: not removing hydrogen atom without neighbors\n",
            "  8% 161/2039 [00:07<01:14, 25.26it/s][21:09:27] WARNING: not removing hydrogen atom without neighbors\n",
            "  8% 169/2039 [00:08<02:01, 15.42it/s][21:09:28] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:28] WARNING: not removing hydrogen atom without neighbors\n",
            " 10% 198/2039 [00:08<01:00, 30.28it/s][21:09:29] WARNING: not removing hydrogen atom without neighbors\n",
            " 13% 265/2039 [00:10<00:55, 32.07it/s][21:09:31] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:31] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:31] WARNING: not removing hydrogen atom without neighbors\n",
            " 15% 305/2039 [00:12<01:07, 25.66it/s][21:09:32] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:32] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:32] WARNING: not removing hydrogen atom without neighbors\n",
            " 20% 417/2039 [00:16<00:46, 35.03it/s][21:09:37] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:37] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:38] WARNING: not removing hydrogen atom without neighbors\n",
            " 23% 464/2039 [00:18<00:48, 32.41it/s][21:09:39] WARNING: not removing hydrogen atom without neighbors\n",
            " 26% 529/2039 [00:21<00:52, 28.80it/s][21:09:41] WARNING: not removing hydrogen atom without neighbors\n",
            " 27% 552/2039 [00:22<00:56, 26.22it/s][21:09:42] WARNING: not removing hydrogen atom without neighbors\n",
            " 28% 570/2039 [00:22<00:58, 25.26it/s][21:09:42] WARNING: not removing hydrogen atom without neighbors\n",
            " 29% 600/2039 [00:23<00:55, 25.71it/s][21:09:43] WARNING: not removing hydrogen atom without neighbors\n",
            " 30% 619/2039 [00:24<00:55, 25.71it/s][21:09:44] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:44] WARNING: not removing hydrogen atom without neighbors\n",
            " 32% 645/2039 [00:25<00:49, 28.14it/s][21:09:45] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:45] WARNING: not removing hydrogen atom without neighbors\n",
            " 34% 687/2039 [00:27<00:57, 23.64it/s][21:09:47] WARNING: not removing hydrogen atom without neighbors\n",
            " 35% 714/2039 [00:28<00:57, 23.01it/s][21:09:48] WARNING: not removing hydrogen atom without neighbors\n",
            " 39% 798/2039 [00:31<00:58, 21.09it/s][21:09:53] WARNING: not removing hydrogen atom without neighbors\n",
            " 40% 813/2039 [00:33<01:37, 12.61it/s][21:09:54] WARNING: not removing hydrogen atom without neighbors\n",
            " 44% 897/2039 [00:36<00:49, 23.10it/s][21:09:56] WARNING: not removing hydrogen atom without neighbors\n",
            " 44% 903/2039 [00:36<00:49, 22.76it/s][21:09:57] WARNING: not removing hydrogen atom without neighbors\n",
            " 45% 919/2039 [00:37<00:44, 25.44it/s][21:09:58] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:58] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:58] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:09:59] WARNING: not removing hydrogen atom without neighbors\n",
            " 50% 1013/2039 [00:41<00:33, 30.69it/s][21:10:01] WARNING: not removing hydrogen atom without neighbors\n",
            " 51% 1041/2039 [00:42<00:34, 29.09it/s][21:10:02] WARNING: not removing hydrogen atom without neighbors\n",
            " 56% 1135/2039 [00:45<00:36, 24.64it/s][21:10:05] WARNING: not removing hydrogen atom without neighbors\n",
            " 57% 1153/2039 [00:46<00:35, 24.77it/s][21:10:06] WARNING: not removing hydrogen atom without neighbors\n",
            " 57% 1161/2039 [00:46<00:28, 30.81it/s][21:10:07] WARNING: not removing hydrogen atom without neighbors\n",
            " 57% 1168/2039 [00:47<00:36, 23.87it/s][21:10:07] WARNING: not removing hydrogen atom without neighbors\n",
            " 58% 1186/2039 [00:47<00:25, 33.33it/s][21:10:07] WARNING: not removing hydrogen atom without neighbors\n",
            " 59% 1194/2039 [00:48<00:26, 32.07it/s][21:10:08] WARNING: not removing hydrogen atom without neighbors\n",
            " 61% 1235/2039 [00:49<00:38, 21.11it/s][21:10:10] WARNING: not removing hydrogen atom without neighbors\n",
            " 62% 1263/2039 [00:51<00:32, 24.19it/s][21:10:11] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:10:11] WARNING: not removing hydrogen atom without neighbors\n",
            " 62% 1268/2039 [00:51<00:32, 24.06it/s][21:10:11] WARNING: not removing hydrogen atom without neighbors\n",
            " 63% 1292/2039 [00:52<00:28, 26.37it/s][21:10:12] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:10:12] WARNING: not removing hydrogen atom without neighbors\n",
            " 64% 1296/2039 [00:52<00:30, 24.41it/s][21:10:12] WARNING: not removing hydrogen atom without neighbors\n",
            " 64% 1308/2039 [00:52<00:27, 26.72it/s][21:10:12] WARNING: not removing hydrogen atom without neighbors\n",
            " 65% 1318/2039 [00:53<00:31, 22.61it/s][21:10:13] WARNING: not removing hydrogen atom without neighbors\n",
            " 65% 1334/2039 [00:53<00:22, 31.22it/s][21:10:13] WARNING: not removing hydrogen atom without neighbors\n",
            " 67% 1356/2039 [00:54<00:36, 18.90it/s][21:10:14] WARNING: not removing hydrogen atom without neighbors\n",
            " 68% 1384/2039 [00:55<00:36, 17.78it/s][21:10:15] WARNING: not removing hydrogen atom without neighbors\n",
            " 71% 1442/2039 [00:57<00:27, 22.02it/s][21:10:18] WARNING: not removing hydrogen atom without neighbors\n",
            " 71% 1448/2039 [00:58<00:27, 21.47it/s][21:10:18] WARNING: not removing hydrogen atom without neighbors\n",
            " 72% 1465/2039 [00:58<00:19, 29.81it/s][21:10:19] WARNING: not removing hydrogen atom without neighbors\n",
            " 72% 1472/2039 [00:59<00:26, 21.40it/s][21:10:19] WARNING: not removing hydrogen atom without neighbors\n",
            " 77% 1567/2039 [01:02<00:22, 21.35it/s][21:10:22] WARNING: not removing hydrogen atom without neighbors\n",
            " 78% 1587/2039 [01:03<00:13, 33.18it/s][21:10:23] WARNING: not removing hydrogen atom without neighbors\n",
            " 78% 1591/2039 [01:03<00:20, 21.92it/s][21:10:23] WARNING: not removing hydrogen atom without neighbors\n",
            " 78% 1594/2039 [01:03<00:20, 21.43it/s]WARNING: not removing hydrogen atom without neighbors\n",
            " 79% 1616/2039 [01:04<00:15, 26.99it/s][21:10:24] WARNING: not removing hydrogen atom without neighbors\n",
            " 79% 1620/2039 [01:04<00:19, 21.72it/s][21:10:24] WARNING: not removing hydrogen atom without neighbors\n",
            " 81% 1659/2039 [01:06<00:16, 23.11it/s][21:10:26] WARNING: not removing hydrogen atom without neighbors\n",
            " 82% 1671/2039 [01:06<00:13, 28.20it/s][21:10:26] WARNING: not removing hydrogen atom without neighbors\n",
            " 83% 1686/2039 [01:07<00:14, 23.67it/s][21:10:27] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:10:27] WARNING: not removing hydrogen atom without neighbors\n",
            " 83% 1689/2039 [01:07<00:17, 19.68it/s][21:10:27] WARNING: not removing hydrogen atom without neighbors\n",
            " 84% 1712/2039 [01:08<00:14, 22.21it/s][21:10:28] WARNING: not removing hydrogen atom without neighbors\n",
            " 85% 1736/2039 [01:09<00:09, 31.18it/s][21:10:29] WARNING: not removing hydrogen atom without neighbors\n",
            " 87% 1766/2039 [01:10<00:08, 32.87it/s][21:10:30] WARNING: not removing hydrogen atom without neighbors\n",
            "[21:10:30] WARNING: not removing hydrogen atom without neighbors\n",
            " 88% 1797/2039 [01:11<00:08, 29.30it/s][21:10:31] WARNING: not removing hydrogen atom without neighbors\n",
            " 92% 1870/2039 [01:14<00:06, 26.35it/s][21:10:34] WARNING: not removing hydrogen atom without neighbors\n",
            " 93% 1896/2039 [01:15<00:05, 26.27it/s][21:10:35] WARNING: not removing hydrogen atom without neighbors\n",
            " 96% 1948/2039 [01:17<00:03, 27.22it/s][21:10:37] WARNING: not removing hydrogen atom without neighbors\n",
            " 97% 1971/2039 [01:18<00:02, 26.46it/s][21:10:38] WARNING: not removing hydrogen atom without neighbors\n",
            "100% 2039/2039 [01:20<00:00, 25.38it/s]\n"
          ]
        }
      ],
      "source": [
        "!python scripts/save_features.py --data_path exampledata/finetune/bbbp.csv \\\n",
        "                                --save_path exampledata/finetune/bbbp.npz \\\n",
        "                                --features_generator rdkit_2d_normalized \\\n",
        "                                --restart "
      ]
    },
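    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The command above writes the generated molecular features to `exampledata/finetune/bbbp.npz`, one feature vector per molecule. A minimal sketch of writing and reading back such a compressed NumPy archive (the array shape and the `features` key are illustrative assumptions, not the exact layout produced by `save_features.py`):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Stand-in for a per-molecule feature matrix:\n",
        "# one row per molecule, one column per descriptor.\n",
        "features = np.random.rand(4, 200).astype(np.float32)\n",
        "\n",
        "# Write a compressed .npz archive, then read it back for inspection\n",
        "np.savez_compressed(\"features_demo.npz\", features=features)\n",
        "loaded = np.load(\"features_demo.npz\")[\"features\"]\n",
        "print(loaded.shape)  # (4, 200)\n",
        "```"
      ]
    },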
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7uu-qkHJ4ygI"
      },
      "source": [
        "## Predicting output with the finetuned model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ZH6Q7l0bp_NW",
        "outputId": "c1cf8642-3759-4f50-bf90-6320f79b0ec3"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "WARNING:root:No normalization for BCUT2D_MWHI\n",
            "WARNING:root:No normalization for BCUT2D_MWLOW\n",
            "WARNING:root:No normalization for BCUT2D_CHGHI\n",
            "WARNING:root:No normalization for BCUT2D_CHGLO\n",
            "WARNING:root:No normalization for BCUT2D_LOGPHI\n",
            "WARNING:root:No normalization for BCUT2D_LOGPLOW\n",
            "WARNING:root:No normalization for BCUT2D_MRHI\n",
            "WARNING:root:No normalization for BCUT2D_MRLOW\n",
            "[WARNING] Horovod cannot be imported; multi-GPU training is unsupported\n",
            "Loading training args\n",
            "Loading data\n",
            "Validating SMILES\n",
            "Test size = 2,039\n",
            "Predicting...\n",
            "  0% 0/3 [00:00<?, ?it/s]Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_node.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_edge.weight\".\n",
            "Loading pretrained parameter \"readout.cached_zero_vector\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.bias\".\n",
            "Moving model to cuda\n",
            "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n",
            "  cpuset_checked))\n",
            " 33% 1/3 [00:08<00:17,  8.86s/it]Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.edge_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_q.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_k.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.heads.0.mpn_v.W_h.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.layernorm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_i.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.0.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.linear_layers.2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.attn.output_linear.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.W_o.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.node_blocks.0.sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_atom_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_atom.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_1.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.W_2.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.ffn_bond_from_bond.act_func.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.atom_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_atom_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.bond_from_bond_sublayer.norm.bias\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_node.weight\".\n",
            "Loading pretrained parameter \"grover.encoders.act_func_edge.weight\".\n",
            "Loading pretrained parameter \"readout.cached_zero_vector\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_atom_ffn.4.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.1.bias\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.2.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.weight\".\n",
            "Loading pretrained parameter \"mol_atom_from_bond_ffn.4.bias\".\n",
            "Moving model to cuda\n",
            "100% 3/3 [00:18<00:00,  6.29s/it]\n",
            "Saving predictions to data_pre.csv\n"
          ]
        }
      ],
      "source": [
        "!python main.py predict --data_path exampledata/finetune/bbbp.csv \\\n",
        "               --features_path exampledata/finetune/bbbp.npz \\\n",
        "               --checkpoint_dir ./model \\\n",
        "               --no_features_scaling \\\n",
        "               --output data_pre.csv"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "y9FW9hHP-FCx"
      },
      "source": [
        "## Output\n",
        "\n",
        "The predictions are saved to `data_pre.csv`, the file named by the `--output` flag in the command above."
      ]
    },
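    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check, you can open the predictions file and look at a few rows. The snippet below is a sketch: it parses a small in-memory stand-in instead of the real `data_pre.csv`, and the column layout (the input `smiles` plus a `p_np` prediction column for the BBBP task) is an assumption about GROVER's output format rather than a guarantee."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import csv\n",
        "import io\n",
        "\n",
        "# Stand-in for data_pre.csv produced by the predict command above.\n",
        "# Column names are assumptions: the input SMILES plus one prediction\n",
        "# column per task (here \"p_np\" for BBBP).\n",
        "sample = io.StringIO(\"smiles,p_np\\nCCO,0.12\\nc1ccccc1,0.87\\n\")\n",
        "rows = list(csv.DictReader(sample))\n",
        "for row in rows:\n",
        "    print(row[\"smiles\"], float(row[\"p_np\"]))"
      ]
    },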
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Congratulations! Time to join the Community!\n",
        "Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:\n",
        "\n",
        "# **Star DeepChem on [Github](https://github.com/deepchem/deepchem)**\n",
        "This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\n",
        "\n",
        "# **Join the DeepChem Gitter**\n",
        "The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!\n"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "Introduction_to_GROVER",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
