{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4YiTAmb_u82D"
      },
      "source": [
        "# **Ferminet Tutorial**\n",
        "\n",
        "Author : Shaipranesh S : [Website](https://shaipranesh.vercel.app/)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e6vvi2nywcWj"
      },
      "source": [
        "\n",
        "**Background:**\n",
        "---\n",
        "  The electrons in a molecule are quantum mechanical in nature, meaning they do not follow classical physical laws. Quantum mechanics gives only the probability of finding an electron in a region of space; it cannot say exactly where the electron is. This probability is the squared magnitude of a property of the molecular system called the wavefunction, which is different for every molecule.\n",
        "\n",
        "For many purposes, the nuclei of the atoms in a molecule can be treated as stationary, and we then solve for the wavefunction of the electrons alone. When modelled in three-dimensional space, these probabilities take the shape of the orbitals, as shown in the images below.\n",
        "\n",
        "![orbitals - Copy.png]()\n",
        "*(From the top left, orbitals in order: 2s, 2p(y), 3p(y), 2p(x), 3d(z^2), 3d(x^2-y^2))*\n",
        "\n",
        "\n",
        "Don't worry if you cannot remember or relate to the concept of orbitals; just remember that these are the regions of space where electrons are most likely to be found.\n",
        "\n",
        "Using these wavefunctions, the electronic structure (a model of the electrons at their most probable positions) of a system can be obtained, which can be used to calculate the ground-state energy. This value can then be used to calculate various properties such as ionization energy and electron affinity.\n",
        "\n",
        "The wavefunctions of simple one-electron systems such as the hydrogen atom or the helium cation can be found easily, but for heavier atoms and molecules, electron-electron repulsion makes the wavefunction hard to compute. Calculating these wavefunctions exactly requires an infeasible amount of computing resources and time. Hence, various techniques for approximating the wavefunction have been introduced, each with a different tradeoff between speed and accuracy. One such method is variational Monte Carlo, which aims to capture the effects of electron correlation at a fraction of the cost of an exact solution.\n",
        "\n",
        "Since deep neural networks act as universal function approximators, they can be used to approximate wavefunctions as well! One such approach is the DNN-based variational Monte Carlo method called [FermiNet](https://arxiv.org/pdf/1909.02487.pdf). In this tutorial we will look at how to use FermiNet to find the ionization potential of a molecule.\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "V9WBYBYXLIfF"
      },
      "source": [
        "**Variational Monte Carlo**\n",
        "\n",
        "VMC is a Monte Carlo method based on the variational principle of quantum mechanics. The variational principle states that the expected energy of a normalized trial wavefunction $\\psi$ is always greater than or equal to the ground-state energy $E_0$ of the system:\n",
        "\n",
        "$$E_0 \\leq \\langle\\psi|\\hat{H}|\\psi\\rangle$$\n",
        "\n",
        "Hence, traditional VMC samples the electrons' coordinates via Monte Carlo techniques and moves the trial wavefunction toward parameter values that minimize its expected energy.\n",
        "\n",
        "Steps:-\n",
        "\n",
        "1. Begin by initializing both the energy and electron coordinate values. Commence the Monte Carlo calculation.\n",
        "\n",
        "\n",
        "2. Compute a trial position.\n",
        "\n",
        "\n",
        "3. Employ the Metropolis algorithm to determine whether to accept or reject the proposed new move.\n",
        "\n",
        "\n",
        "4. If the move is accepted, update the electron positions and the running averages.\n",
        "\n",
        "\n",
        "5. Conclude the calculation and compute the final averages after a set number of steps.\n",
        "\n",
        "\n",
        "By this method, we can approximate the ground-state energies of molecular systems. However, the results depend heavily on the quality of the trial wavefunction, and optimizing these functions is an area of active research.\n",
        "Deep learning techniques can help approximate and optimize trial wavefunctions, with physics-based elements built into the ansatz and the expected energy as the cost function.\n"
      ]
    },
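    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The steps above can be sketched in plain NumPy. The toy example below (our own illustration, not part of FermiNet) applies the Metropolis algorithm to a 1D harmonic oscillator with the hypothetical trial wavefunction `exp(-a*x^2)`; at `a = 0.5` the trial function is exact, so the local energy is constant and the sampled mean equals the ground-state energy 0.5:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "\n",
        "def local_energy(x, a):\n",
        "    # E_L = -(1/2) psi''/psi + V for psi(x) = exp(-a x^2), V(x) = x^2/2\n",
        "    return a + x**2 * (0.5 - 2.0 * a**2)\n",
        "\n",
        "def log_prob(x, a):\n",
        "    # log |psi(x)|^2, the density sampled by the Metropolis walk\n",
        "    return -2.0 * a * x**2\n",
        "\n",
        "def vmc_energy(a, n_steps=20000, step=1.0):\n",
        "    x = 0.0\n",
        "    energies = []\n",
        "    for i in range(n_steps):\n",
        "        x_new = x + rng.normal(scale=step)  # trial move\n",
        "        # Metropolis acceptance test on the |psi|^2 ratio\n",
        "        if np.log(rng.random()) < log_prob(x_new, a) - log_prob(x, a):\n",
        "            x = x_new\n",
        "        if i > n_steps // 10:  # discard burn-in steps\n",
        "            energies.append(local_energy(x, a))\n",
        "    return np.mean(energies)\n",
        "\n",
        "print(vmc_energy(0.5))  # prints 0.5 (exact trial wavefunction)\n",
        "```"
      ]
    },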
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pc9_KOVCvA_n"
      },
      "source": [
        "# **Colab**\n",
        "This tutorial and the rest in this sequence can be run in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.\n",
        "\n",
        "\n",
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/FermiNet_DeepChem_tutorial.ipynb)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dvQS_xWBz5Ef"
      },
      "source": [
        "**FermiNet architecture details overview:**\n",
        "\n",
        "FermiNet has “L” one-electron and two-electron feature layers, which act as embedding layers for the electron-distance features.\n",
        "It models the ansatz with an “envelope layer” that takes the one-electron features computed by the embedding layers as input. The basic idea of the envelope is to enforce the boundary condition that the wavefunction approaches 0 as the electron's distance from the nuclei tends to infinity.\n",
        "\n",
        "FermiNet expresses the orbitals in a Slater-type form, where the multi-electron orbitals incorporate the calculated envelope functions. The determinant over all the orbitals gives the sampled trial wavefunction.\n",
        "\n",
        "For more information on our implementation of FermiNet and the new heuristics for ions as input, check out our paper: https://arxiv.org/abs/2401.10287"
      ]
    },
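    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a rough sketch of the envelope idea (illustrative only; the function name and parameters are our own, not DeepChem's API), an isotropic exponential envelope for one orbital can be written as a sum of decaying exponentials centred on the nuclei:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def isotropic_envelope(r_electron, r_nuclei, pi, sigma):\n",
        "    # sum_m pi[m] * exp(-sigma[m] * |r - R_m|): decays to 0 as the\n",
        "    # electron moves infinitely far from every nucleus.\n",
        "    dists = np.linalg.norm(r_electron - r_nuclei, axis=1)\n",
        "    return np.sum(pi * np.exp(-sigma * dists))\n",
        "\n",
        "nuclei = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.4]])  # H2-like geometry\n",
        "pi, sigma = np.ones(2), np.ones(2)\n",
        "near = isotropic_envelope(np.array([0.0, 0.0, 0.7]), nuclei, pi, sigma)\n",
        "far = isotropic_envelope(np.array([0.0, 0.0, 100.0]), nuclei, pi, sigma)\n",
        "print(near, far)  # the far value is vanishingly small\n",
        "```"
      ]
    },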
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UX8jrxDnL9KG"
      },
      "source": [
        "**Ferminet training:**\n",
        "\n",
        "\n",
        "FermiNet training has two parts: supervised pretraining, followed by unsupervised optimization of the ansatz with the expected energy as the cost function.\n",
        "In pretraining, orbital values calculated with the Hartree-Fock method are used as the labels that FermiNet's orbital values are trained to match. Here the loss is simply the mean squared error between the FermiNet and Hartree-Fock orbital values. This trains FermiNet to learn an ansatz that matches the Hartree-Fock baseline.\n",
        "Then, in the actual training, the model is tuned according to the expected-energy cost function. The model learns to capture the electron interactions missing from the Hartree-Fock solution, improving the quality of the trial wavefunction while simultaneously sampling electron positions that reduce the expected energy further.\n"
      ]
    },
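    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The pretraining objective can be sketched in a few lines (the numbers below are made up purely for illustration):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Hypothetical orbital values: rows are sampled electron configurations,\n",
        "# columns are orbitals; one array from FermiNet, one from Hartree-Fock.\n",
        "ferminet_orbitals = np.array([[0.9, 0.1], [0.4, 0.3]])\n",
        "hf_orbitals = np.array([[1.0, 0.0], [0.5, 0.5]])\n",
        "\n",
        "# Pretraining loss: mean squared error between the two sets of values.\n",
        "mse = np.mean((ferminet_orbitals - hf_orbitals) ** 2)\n",
        "print(mse)  # ~0.0175\n",
        "```"
      ]
    },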
    {
      "cell_type": "markdown",
      "source": [
        "In this tutorial we will calculate the ground-state energy of the H2 molecule using the FermiNet model implementation in DeepChem."
      ],
      "metadata": {
        "id": "RAH25CkzNo9H"
      }
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "5d64a2pTvEXg",
        "outputId": "b0337e4d-b05d-4cd1-a25a-b870607f0993"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Collecting deepchem\n",
            "  Downloading deepchem-2.7.2.dev20240203232027-py3-none-any.whl (999 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m999.5/999.5 kB\u001b[0m \u001b[31m9.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: joblib in /usr/local/lib/python3.10/dist-packages (from deepchem) (1.3.2)\n",
            "Requirement already satisfied: numpy>=1.21 in /usr/local/lib/python3.10/dist-packages (from deepchem) (1.23.5)\n",
            "Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from deepchem) (1.5.3)\n",
            "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from deepchem) (1.2.2)\n",
            "Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from deepchem) (1.12)\n",
            "Requirement already satisfied: scipy>=1.10.1 in /usr/local/lib/python3.10/dist-packages (from deepchem) (1.11.4)\n",
            "Collecting rdkit (from deepchem)\n",
            "  Downloading rdkit-2023.9.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.4 MB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m34.4/34.4 MB\u001b[0m \u001b[31m35.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->deepchem) (2.8.2)\n",
            "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->deepchem) (2023.4)\n",
            "Requirement already satisfied: Pillow in /usr/local/lib/python3.10/dist-packages (from rdkit->deepchem) (9.4.0)\n",
            "Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->deepchem) (3.2.0)\n",
            "Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->deepchem) (1.3.0)\n",
            "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.1->pandas->deepchem) (1.16.0)\n",
            "Installing collected packages: rdkit, deepchem\n",
            "Successfully installed deepchem-2.7.2.dev20240203232027 rdkit-2023.9.4\n"
          ]
        }
      ],
      "source": [
        "!pip install --pre deepchem"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "gzAQbeuAynaU",
        "outputId": "154bf106-5aa9-47ed-8f11-097cb4db8692"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Collecting torch_geometric\n",
            "  Downloading torch_geometric-2.4.0-py3-none-any.whl (1.0 MB)\n",
            "\u001b[?25l     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/1.0 MB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K     \u001b[91m━━━━━━━━━\u001b[0m\u001b[91m╸\u001b[0m\u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.3/1.0 MB\u001b[0m \u001b[31m7.4 MB/s\u001b[0m eta \u001b[36m0:00:01\u001b[0m\r\u001b[2K     \u001b[91m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[91m╸\u001b[0m\u001b[90m━━━━━━━━━\u001b[0m \u001b[32m0.8/1.0 MB\u001b[0m \u001b[31m11.3 MB/s\u001b[0m eta \u001b[36m0:00:01\u001b[0m\r\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m10.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from torch_geometric) (4.66.1)\n",
            "Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from torch_geometric) (1.23.5)\n",
            "Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from torch_geometric) (1.11.4)\n",
            "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch_geometric) (3.1.3)\n",
            "Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from torch_geometric) (2.31.0)\n",
            "Requirement already satisfied: pyparsing in /usr/local/lib/python3.10/dist-packages (from torch_geometric) (3.1.1)\n",
            "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from torch_geometric) (1.2.2)\n",
            "Requirement already satisfied: psutil>=5.8.0 in /usr/local/lib/python3.10/dist-packages (from torch_geometric) (5.9.5)\n",
            "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch_geometric) (2.1.4)\n",
            "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->torch_geometric) (3.3.2)\n",
            "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->torch_geometric) (3.6)\n",
            "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->torch_geometric) (2.0.7)\n",
            "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->torch_geometric) (2023.11.17)\n",
            "Requirement already satisfied: joblib>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->torch_geometric) (1.3.2)\n",
            "Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->torch_geometric) (3.2.0)\n",
            "Installing collected packages: torch_geometric\n",
            "Successfully installed torch_geometric-2.4.0\n"
          ]
        }
      ],
      "source": [
        "!pip install torch_geometric"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Hbuzr-8a1wNG",
        "outputId": "2b144917-b6c5-4248-8ace-f1b37e3970ea"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Collecting pyscf\n",
            "  Downloading pyscf-2.4.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (47.3 MB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m47.3/47.3 MB\u001b[0m \u001b[31m16.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: numpy!=1.16,!=1.17,>=1.13 in /usr/local/lib/python3.10/dist-packages (from pyscf) (1.23.5)\n",
            "Requirement already satisfied: scipy!=1.5.0,!=1.5.1 in /usr/local/lib/python3.10/dist-packages (from pyscf) (1.11.4)\n",
            "Requirement already satisfied: h5py>=2.7 in /usr/local/lib/python3.10/dist-packages (from pyscf) (3.9.0)\n",
            "Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from pyscf) (67.7.2)\n",
            "Installing collected packages: pyscf\n",
            "Successfully installed pyscf-2.4.0\n"
          ]
        }
      ],
      "source": [
        "!pip install pyscf"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "#  Initialization & Pretraining of FermiNet\n",
        "\n",
        "The only input FermiNet requires is a list of lists containing each nucleus's element symbol and 3D coordinates.\n",
        "\n",
        "The format is `[['Element Symbol',[3D coordinates]],[...],...]`"
      ],
      "metadata": {
        "id": "H5FD4KstniOz"
      }
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "CqzTFMLb5sSX"
      },
      "outputs": [],
      "source": [
        "from deepchem.models.torch_models.ferminet import FerminetModel\n",
        "import torch"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ganhirP5PmjP",
        "outputId": "db57e9ef-50b7-47df-9c6e-97ef6ffc201d"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "converged SCF energy = -1.11628637177674  <S^2> = 2.220446e-16  2S+1 = 1\n",
            " ** Mulliken pop alpha/beta on meta-lowdin orthogonal AOs **\n",
            " ** Mulliken pop       alpha | beta **\n",
            "pop of  0 H 1s        0.50000 | 0.50000   \n",
            "pop of  1 H 1s        0.50000 | 0.50000   \n",
            "In total             1.00000 | 1.00000   \n",
            " ** Mulliken atomic charges   ( Nelec_alpha | Nelec_beta ) **\n",
            "charge of  0H =      0.00000  (     0.50000      0.50000 )\n",
            "charge of  1H =      0.00000  (     0.50000      0.50000 )\n",
            "converged SCF energy = -1.11628637177674  <S^2> = 2.220446e-16  2S+1 = 1\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "/usr/local/lib/python3.10/dist-packages/deepchem/models/torch_models/layers.py:5657: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.\n",
            "  self.wdet.append(torch.nn.init.normal(torch.zeros(1)).squeeze(0))\n",
            "/usr/local/lib/python3.10/dist-packages/deepchem/models/torch_models/layers.py:5660: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.\n",
            "  (torch.nn.init.normal(torch.zeros(n_one[-1], 1),) /\n",
            "/usr/local/lib/python3.10/dist-packages/deepchem/models/torch_models/layers.py:5663: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.\n",
            "  (torch.nn.init.normal(torch.zeros(1))).squeeze(0))\n",
            "/usr/local/lib/python3.10/dist-packages/pyscf/gto/mole.py:1280: UserWarning: Function mol.dumps drops attribute spin because it is not JSON-serializable\n",
            "  warnings.warn(msg)\n",
            "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/loss.py:535: UserWarning: Using a target size (torch.Size([4, 1, 1, 1])) that is different to the input size (torch.Size([4, 16, 1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.\n",
            "  return F.mse_loss(input, target, reduction=self.reduction)\n",
            "/usr/local/lib/python3.10/dist-packages/deepchem/models/torch_models/ferminet.py:516: RuntimeWarning: divide by zero encountered in log\n",
            "  return 2 * np.log(np.abs(np_output))\n",
            "/usr/local/lib/python3.10/dist-packages/deepchem/utils/electron_sampler.py:252: RuntimeWarning: invalid value encountered in subtract\n",
            "  ratio = lp2 - lp1\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "The loss for the pretraining iteration 0 is 0.02710416354238987\n",
            "The loss for the pretraining iteration 1 is 0.02550419606268406\n",
            "The loss for the pretraining iteration 2 is 0.014827233739197254\n",
            "The loss for the pretraining iteration 3 is 0.010277194902300835\n",
            "The loss for the pretraining iteration 4 is 0.012760447338223457\n",
            "The loss for the pretraining iteration 5 is 0.008348378352820873\n",
            "The loss for the pretraining iteration 6 is 0.004029400181025267\n",
            "The loss for the pretraining iteration 7 is 0.004069587681442499\n",
            "The loss for the pretraining iteration 8 is 0.006711069494485855\n",
            "The loss for the pretraining iteration 9 is 0.006214166060090065\n",
            "The loss for the pretraining iteration 10 is 0.0034270230680704117\n",
            "The loss for the pretraining iteration 11 is 0.0018716256599873304\n",
            "The loss for the pretraining iteration 12 is 0.0025977646000683308\n",
            "The loss for the pretraining iteration 13 is 0.0038076299242675304\n",
            "The loss for the pretraining iteration 14 is 0.0035712781827896833\n",
            "The loss for the pretraining iteration 15 is 0.0021383825223892927\n",
            "The loss for the pretraining iteration 16 is 0.001360538648441434\n",
            "The loss for the pretraining iteration 17 is 0.0018277096096426249\n",
            "The loss for the pretraining iteration 18 is 0.002370834816247225\n",
            "The loss for the pretraining iteration 19 is 0.0023357041645795107\n",
            "The loss for the pretraining iteration 20 is 0.0016357541317120194\n",
            "The loss for the pretraining iteration 21 is 0.0011592376977205276\n",
            "The loss for the pretraining iteration 22 is 0.0010732874507084489\n",
            "The loss for the pretraining iteration 23 is 0.00105482735671103\n",
            "The loss for the pretraining iteration 24 is 0.0014446597779169679\n",
            "The loss for the pretraining iteration 25 is 0.0015389187028631568\n",
            "The loss for the pretraining iteration 26 is 0.0010128382127732038\n",
            "The loss for the pretraining iteration 27 is 0.0005692644044756889\n",
            "The loss for the pretraining iteration 28 is 0.0008503105491399765\n",
            "The loss for the pretraining iteration 29 is 0.001231641392223537\n",
            "The loss for the pretraining iteration 30 is 0.0009319973178207874\n",
            "The loss for the pretraining iteration 31 is 0.0008413218893110752\n",
            "The loss for the pretraining iteration 32 is 0.0007483096560463309\n",
            "The loss for the pretraining iteration 33 is 0.0005510839982889593\n",
            "The loss for the pretraining iteration 34 is 0.000656743417493999\n",
            "The loss for the pretraining iteration 35 is 0.0007715957472100854\n",
            "The loss for the pretraining iteration 36 is 0.0005374419270083308\n",
            "The loss for the pretraining iteration 37 is 0.0003938480804208666\n",
            "The loss for the pretraining iteration 38 is 0.000548980722669512\n",
            "The loss for the pretraining iteration 39 is 0.00048400662490166724\n",
            "The loss for the pretraining iteration 40 is 0.00038873759331181645\n",
            "The loss for the pretraining iteration 41 is 0.00045097796828486025\n",
            "The loss for the pretraining iteration 42 is 0.0003217317280359566\n",
            "The loss for the pretraining iteration 43 is 0.0003061808238271624\n",
            "The loss for the pretraining iteration 44 is 0.00040030136005952954\n",
            "The loss for the pretraining iteration 45 is 0.00026911977329291403\n",
            "The loss for the pretraining iteration 46 is 0.0002281282504554838\n",
            "The loss for the pretraining iteration 47 is 0.00028802669839933515\n",
            "The loss for the pretraining iteration 48 is 0.00024558795848861337\n",
            "The loss for the pretraining iteration 49 is 0.0002372544986428693\n",
            "The loss for the pretraining iteration 50 is 0.00019725140009541065\n",
            "The loss for the pretraining iteration 51 is 0.0001587205333635211\n",
            "The loss for the pretraining iteration 52 is 0.0001574101479491219\n",
            "The loss for the pretraining iteration 53 is 0.00013751318329013884\n",
            "The loss for the pretraining iteration 54 is 0.00011399536015233025\n",
            "The loss for the pretraining iteration 55 is 9.619578486308455e-05\n",
            "The loss for the pretraining iteration 56 is 0.00010916237806668505\n",
            "The loss for the pretraining iteration 57 is 0.00012222137593198568\n",
            "The loss for the pretraining iteration 58 is 0.000119653414003551\n",
            "The loss for the pretraining iteration 59 is 0.00012534541019704193\n",
            "The loss for the pretraining iteration 60 is 0.0001249718334292993\n",
            "The loss for the pretraining iteration 61 is 0.00012150041584391147\n",
            "The loss for the pretraining iteration 62 is 0.00014419096987694502\n",
            "The loss for the pretraining iteration 63 is 0.00013020743790548295\n",
            "The loss for the pretraining iteration 64 is 0.00012055526167387143\n",
            "The loss for the pretraining iteration 65 is 0.00011795425962191075\n",
            "The loss for the pretraining iteration 66 is 0.00011245403584325686\n",
            "The loss for the pretraining iteration 67 is 0.00011248439841438085\n",
            "The loss for the pretraining iteration 68 is 0.00010244370059808716\n",
            "The loss for the pretraining iteration 69 is 9.188290277961642e-05\n",
            "The loss for the pretraining iteration 70 is 8.389401773456484e-05\n",
            "The loss for the pretraining iteration 71 is 7.961162918945774e-05\n",
            "The loss for the pretraining iteration 72 is 8.842563693178818e-05\n",
            "The loss for the pretraining iteration 73 is 8.383024396607652e-05\n",
            "The loss for the pretraining iteration 74 is 7.287398329935968e-05\n",
            "The loss for the pretraining iteration 75 is 6.263024260988459e-05\n",
            "The loss for the pretraining iteration 76 is 7.332944369409233e-05\n",
            "The loss for the pretraining iteration 77 is 6.392243813024834e-05\n",
            "The loss for the pretraining iteration 78 is 6.126626249169931e-05\n",
            "The loss for the pretraining iteration 79 is 7.243566506076604e-05\n",
            "The loss for the pretraining iteration 80 is 8.115632954286411e-05\n",
            "The loss for the pretraining iteration 81 is 9.462016896577552e-05\n",
            "The loss for the pretraining iteration 82 is 8.256117871496826e-05\n",
            "The loss for the pretraining iteration 83 is 8.43695888761431e-05\n",
            "The loss for the pretraining iteration 84 is 8.064397115958855e-05\n",
            "The loss for the pretraining iteration 85 is 8.412772149313241e-05\n",
            "The loss for the pretraining iteration 86 is 7.916030881460756e-05\n",
            "The loss for the pretraining iteration 87 is 7.245060987770557e-05\n",
            "The loss for the pretraining iteration 88 is 7.072711741784588e-05\n",
            "The loss for the pretraining iteration 89 is 7.030618871795014e-05\n",
            "The loss for the pretraining iteration 90 is 6.857124390080571e-05\n",
            "The loss for the pretraining iteration 91 is 6.016474071657285e-05\n",
            "The loss for the pretraining iteration 92 is 6.819269765401259e-05\n",
            "The loss for the pretraining iteration 93 is 5.848338332725689e-05\n",
            "The loss for the pretraining iteration 94 is 5.700012479792349e-05\n",
            "The loss for the pretraining iteration 95 is 7.55156870582141e-05\n",
            "The loss for the pretraining iteration 96 is 8.392085874220356e-05\n",
            "The loss for the pretraining iteration 97 is 7.021804776741192e-05\n",
            "The loss for the pretraining iteration 98 is 7.385178469121456e-05\n",
            "The loss for the pretraining iteration 99 is 6.332934572128579e-05\n",
            "The loss for the pretraining iteration 100 is 6.405771273421124e-05\n",
            "The loss for the pretraining iteration 101 is 5.121116555528715e-05\n",
            "The loss for the pretraining iteration 102 is 4.800366150448099e-05\n",
            "The loss for the pretraining iteration 103 is 4.726020051748492e-05\n",
            "The loss for the pretraining iteration 104 is 4.51159939984791e-05\n",
            "The loss for the pretraining iteration 105 is 8.514018554706126e-05\n",
            "The loss for the pretraining iteration 106 is 8.576172695029527e-05\n",
            "The loss for the pretraining iteration 107 is 0.00011253333650529385\n",
            "The loss for the pretraining iteration 108 is 0.00012075374979758635\n",
            "The loss for the pretraining iteration 109 is 0.00017231094534508884\n",
            "The loss for the pretraining iteration 110 is 0.00018021675350610167\n",
            "The loss for the pretraining iteration 111 is 0.00016529613640159369\n",
            "The loss for the pretraining iteration 112 is 0.00011768848344217986\n",
            "The loss for the pretraining iteration 113 is 0.00012986203364562243\n",
            "The loss for the pretraining iteration 114 is 0.0001382317568641156\n",
            "The loss for the pretraining iteration 115 is 0.00015827579773031175\n",
            "The loss for the pretraining iteration 116 is 0.0001278908021049574\n",
            "The loss for the pretraining iteration 117 is 0.0001451588759664446\n",
            "The loss for the pretraining iteration 118 is 0.00013769550423603505\n",
            "The loss for the pretraining iteration 119 is 0.0001438808540115133\n",
            "The loss for the pretraining iteration 120 is 0.00015617000462953\n",
            "The loss for the pretraining iteration 121 is 0.00012382343993522227\n",
            "The loss for the pretraining iteration 122 is 0.00014192094386089593\n",
            "The loss for the pretraining iteration 123 is 0.00014249228115659207\n",
            "The loss for the pretraining iteration 124 is 0.00014419082435779274\n",
            "The loss for the pretraining iteration 125 is 0.00012240225623827428\n",
            "The loss for the pretraining iteration 126 is 0.0001294954854529351\n",
            "The loss for the pretraining iteration 127 is 0.0001253262162208557\n",
            "The loss for the pretraining iteration 128 is 0.00014096128870733082\n",
            "The loss for the pretraining iteration 129 is 0.0001674834784353152\n",
            "The loss for the pretraining iteration 130 is 0.00024367403239011765\n",
            "The loss for the pretraining iteration 131 is 0.00026486467686481774\n",
            "The loss for the pretraining iteration 132 is 0.00024559051962569356\n",
            "The loss for the pretraining iteration 133 is 0.00023632289958186448\n",
            "The loss for the pretraining iteration 134 is 0.00020681091700680554\n",
            "The loss for the pretraining iteration 135 is 0.00027475703973323107\n",
            "The loss for the pretraining iteration 136 is 0.0002840908127836883\n",
            "The loss for the pretraining iteration 137 is 0.00028007503715343773\n",
            "The loss for the pretraining iteration 138 is 0.00026251544477418065\n",
            "The loss for the pretraining iteration 139 is 0.00025227729929611087\n",
            "The loss for the pretraining iteration 140 is 0.00017289069364778697\n",
            "The loss for the pretraining iteration 141 is 0.00017096629017032683\n",
            "The loss for the pretraining iteration 142 is 0.00015920297300908715\n",
            "The loss for the pretraining iteration 143 is 0.0001496883196523413\n",
            "The loss for the pretraining iteration 144 is 0.0001929181453306228\n",
            "The loss for the pretraining iteration 145 is 0.00019793790124822408\n",
            "The loss for the pretraining iteration 146 is 0.0001737314014462754\n",
            "The loss for the pretraining iteration 147 is 0.00021336138888727874\n",
            "The loss for the pretraining iteration 148 is 0.00027954630786553025\n",
            "The loss for the pretraining iteration 149 is 0.00033040952985174954\n"
          ]
        }
      ],
      "source": [
        "import logging\n",
        "logger = logging.getLogger()\n",
        "logger.setLevel(logging.DEBUG)\n",
        "\n",
        "# Initializing the H2 molecule's coordinates\n",
        "H2_molecule = [['H', [0, 0, 0]],['H', [0, 0, 1.4135151]]]\n",
        "# Initializing the FermiNet model (spin: 0, charge: 0, batch_no: 4 - the larger the batch size, the more accurate the solution)\n",
        "mol = FerminetModel(H2_molecule, spin=0, ion_charge=0, batch_no=4)\n",
        "# Pretrain the model first\n",
        "mol.train(nb_epoch=150)"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now, let's manually calculate and print the energy after pretraining by calling the model's functions and using the result of passing the input through the network."
      ],
      "metadata": {
        "id": "OMWHo_TlInal"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "mol.model.forward(torch.from_numpy(mol.molecule.x))\n",
        "energy = mol.model.calculate_electron_electron(\n",
        ") - mol.model.calculate_electron_nuclear(\n",
        ") + mol.model.nuclear_nuclear_potential + mol.model.calculate_kinetic_energy(\n",
        ")\n",
        "mean_energy = torch.mean(energy)\n",
        "print(\"the energy before training begins\")\n",
        "print(mean_energy)"
      ],
      "metadata": {
        "id": "Go4orRTrHBlD",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "8282f984-e412-4f8d-92dc-01fdcf6cf6e2"
      },
      "execution_count": 25,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "the energy before training begins\n",
            "tensor(-1.2601, dtype=torch.float64)\n"
          ]
        }
      ]
    },
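    {
      "cell_type": "markdown",
      "source": [
        "The sum computed above is the local energy of the sampled electron configurations. As a sketch of the decomposition (the term names follow the methods called above, and $V_{en}$ is subtracted because the method returns the magnitude of the attraction):\n",
        "\n",
        "$$E_{local} = V_{ee} - V_{en} + V_{nn} + T$$\n",
        "\n",
        "where $V_{ee}$ is the electron-electron repulsion, $V_{en}$ the electron-nuclear attraction, $V_{nn}$ the fixed nuclear-nuclear repulsion, and $T$ the kinetic energy obtained from the wavefunction."
      ],
      "metadata": {}
    },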
    {
      "cell_type": "markdown",
      "source": [
        "You can see that this value closely matches the Hartree-Fock (HF) ground-state energy shown in the pretraining output logs. Pretraining gives the model a better starting position than training from scratch."
      ],
      "metadata": {
        "id": "krHQtkKAKoUw"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Training of FermiNet\n",
        "\n",
        "Before the actual training begins, we have to call the `prepare_train` function for the MCMC burn-in and to re-initialize the electrons' positions. Then we call the `train` function to begin training FermiNet."
      ],
      "metadata": {
        "id": "m1XPrAImLQIs"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import logging\n",
        "logger = logging.getLogger()\n",
        "logger.setLevel(logging.DEBUG)\n",
        "\n",
        "mol.prepare_train()\n",
        "mol.train(nb_epoch=100, lr=0.0002, std=0.04)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "BAXwwNzvKGNO",
        "outputId": "d6795331-950d-4cc0-917d-a2187d25315c"
      },
      "execution_count": 26,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "The mean energy for the training iteration 0 is -1.2023134152304877\n",
            "The mean energy for the training iteration 1 is -1.2212279380372484\n",
            "The mean energy for the training iteration 2 is -1.22077931653164\n",
            "The mean energy for the training iteration 3 is -1.214419004496566\n",
            "The mean energy for the training iteration 4 is -1.174576947039055\n",
            "The mean energy for the training iteration 5 is -1.1390901910542182\n",
            "The mean energy for the training iteration 6 is -1.0899969485517154\n",
            "The mean energy for the training iteration 7 is -1.0565436788181055\n",
            "The mean energy for the training iteration 8 is -1.064048258001718\n",
            "The mean energy for the training iteration 9 is -1.0608130317570892\n",
            "The mean energy for the training iteration 10 is -1.1026921930114093\n",
            "The mean energy for the training iteration 11 is -1.066838369414313\n",
            "The mean energy for the training iteration 12 is -1.0981637181819963\n",
            "The mean energy for the training iteration 13 is -1.208385874402916\n",
            "The mean energy for the training iteration 14 is -1.262695830608003\n",
            "The mean energy for the training iteration 15 is -1.0776739485752698\n",
            "The mean energy for the training iteration 16 is -1.0480650889772192\n",
            "The mean energy for the training iteration 17 is -1.099731226839941\n",
            "The mean energy for the training iteration 18 is -1.1020925107429984\n",
            "The mean energy for the training iteration 19 is -1.0602907990536032\n",
            "The mean energy for the training iteration 20 is -0.9566037929019523\n",
            "The mean energy for the training iteration 21 is -0.9902187160565474\n",
            "The mean energy for the training iteration 22 is -1.1216525050948858\n",
            "The mean energy for the training iteration 23 is -1.3102285531306923\n",
            "The mean energy for the training iteration 24 is -1.1289260924906317\n",
            "The mean energy for the training iteration 25 is -1.1545930458513745\n",
            "The mean energy for the training iteration 26 is -1.085439514038623\n",
            "The mean energy for the training iteration 27 is -1.2030956851765415\n",
            "The mean energy for the training iteration 28 is -1.1962403140378088\n",
            "The mean energy for the training iteration 29 is -1.1727665975271335\n",
            "The mean energy for the training iteration 30 is -1.1728282634455278\n",
            "The mean energy for the training iteration 31 is -1.2340020843253603\n",
            "The mean energy for the training iteration 32 is -1.2723080264500728\n",
            "The mean energy for the training iteration 33 is -1.1829203288088372\n",
            "The mean energy for the training iteration 34 is -1.1556855209825894\n",
            "The mean energy for the training iteration 35 is -1.1667598485140205\n",
            "The mean energy for the training iteration 36 is -1.1076713600180712\n",
            "The mean energy for the training iteration 37 is -1.070013376715587\n",
            "The mean energy for the training iteration 38 is -1.0915343879799217\n",
            "The mean energy for the training iteration 39 is -1.100337966650541\n",
            "The mean energy for the training iteration 40 is -1.077128274244595\n",
            "The mean energy for the training iteration 41 is -1.0911279301396857\n",
            "The mean energy for the training iteration 42 is -1.0795437477185272\n",
            "The mean energy for the training iteration 43 is -0.9939171985437859\n",
            "The mean energy for the training iteration 44 is -0.961124177353687\n",
            "The mean energy for the training iteration 45 is -1.003981840638917\n",
            "The mean energy for the training iteration 46 is -1.0184812943764725\n",
            "The mean energy for the training iteration 47 is -1.0478573206591946\n",
            "The mean energy for the training iteration 48 is -1.0500256503962087\n",
            "The mean energy for the training iteration 49 is -1.0841515217318658\n",
            "The mean energy for the training iteration 50 is -1.0957687584756604\n",
            "The mean energy for the training iteration 51 is -1.1256449471515435\n",
            "The mean energy for the training iteration 52 is -1.117174204903339\n",
            "The mean energy for the training iteration 53 is -1.1383184228748677\n",
            "The mean energy for the training iteration 54 is -1.1357051715211985\n",
            "The mean energy for the training iteration 55 is -1.1516425151284557\n",
            "The mean energy for the training iteration 56 is -1.1474583144326511\n",
            "The mean energy for the training iteration 57 is -1.1737528268563848\n",
            "The mean energy for the training iteration 58 is -1.192580238177998\n",
            "The mean energy for the training iteration 59 is -1.2092762589236268\n",
            "The mean energy for the training iteration 60 is -1.2034553297809725\n",
            "The mean energy for the training iteration 61 is -1.2136539109801798\n",
            "The mean energy for the training iteration 62 is -1.219436813088698\n",
            "The mean energy for the training iteration 63 is -1.251482360406399\n",
            "The mean energy for the training iteration 64 is -1.3380186001649237\n",
            "The mean energy for the training iteration 65 is -1.343535572481982\n",
            "The mean energy for the training iteration 66 is -1.458903390823884\n",
            "The mean energy for the training iteration 67 is -1.452288105106742\n",
            "The mean energy for the training iteration 68 is -1.3412629040407416\n",
            "The mean energy for the training iteration 69 is -1.2484572949534931\n",
            "The mean energy for the training iteration 70 is -1.1932754129283258\n",
            "The mean energy for the training iteration 71 is -1.0801422915949135\n",
            "The mean energy for the training iteration 72 is -1.035673991112145\n",
            "The mean energy for the training iteration 73 is -1.0865590297341998\n",
            "The mean energy for the training iteration 74 is -1.0355462663016874\n",
            "The mean energy for the training iteration 75 is -0.9428634016909109\n",
            "The mean energy for the training iteration 76 is -0.8901228445438876\n",
            "The mean energy for the training iteration 77 is -1.0595054843866776\n",
            "The mean energy for the training iteration 78 is -1.141316975690703\n",
            "The mean energy for the training iteration 79 is -0.9814779921006237\n",
            "The mean energy for the training iteration 80 is -0.9543074035004614\n",
            "The mean energy for the training iteration 81 is -1.000271851278307\n",
            "The mean energy for the training iteration 82 is -0.9922986098943067\n",
            "The mean energy for the training iteration 83 is -0.980377129710818\n",
            "The mean energy for the training iteration 84 is -0.9546578237472371\n",
            "The mean energy for the training iteration 85 is -0.9753259735500012\n",
            "The mean energy for the training iteration 86 is -0.9808058184182848\n",
            "The mean energy for the training iteration 87 is -0.9869200051528082\n",
            "The mean energy for the training iteration 88 is -0.9899927041266026\n",
            "The mean energy for the training iteration 89 is -0.9665689867968612\n",
            "The mean energy for the training iteration 90 is -0.8820009574594322\n",
            "The mean energy for the training iteration 91 is -0.7977323877511658\n",
            "The mean energy for the training iteration 92 is -0.8918551509352444\n",
            "The mean energy for the training iteration 93 is -0.9386789642017561\n",
            "The mean energy for the training iteration 94 is -0.9428279830494173\n",
            "The mean energy for the training iteration 95 is -0.9203976307657098\n",
            "The mean energy for the training iteration 96 is -0.8966204553816349\n",
            "The mean energy for the training iteration 97 is -0.9396807634909125\n",
            "The mean energy for the training iteration 98 is -0.9814526279968099\n",
            "The mean energy for the training iteration 99 is -0.981814370630495\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Access the `final_energy` attribute of the `mol` object to see the net average ground-state energy from the training."
      ],
      "metadata": {
        "id": "huSbfaBDLy1h"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "mol.final_energy.item()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "_RoWdFx8Z5ZG",
        "outputId": "5695d21c-16f7-43ca-c03a-58272af46e9a"
      },
      "execution_count": 27,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "-1.0981049045352125"
            ]
          },
          "metadata": {},
          "execution_count": 27
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "The calculated ground-state energy closely matches the exact value of -1.174476 Hartrees.\n",
        "\n",
        "To get more accurate energies, try running for more iterations or tuning parameters such as the MCMC step proposals, layer sizes, etc."
      ],
      "metadata": {
        "id": "VEOfLOfDC4DO"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Congratulations! Time to join the Community!\n",
        "\n",
        "Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:\n",
        "\n",
        "\n",
        "## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)\n",
        "This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\n",
        "\n",
        "\n",
        "## Join the DeepChem Gitter\n",
        "The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!"
      ],
      "metadata": {
        "id": "2dLi0sw8MmiY"
      }
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}