{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 药物靶点相互作用预测"
   ]
  },
  {
   "source": [
    "在此教程中，我们将展示如何使用MolTrans模型进行药物与靶点蛋白相互作用的预测。具体来说，我们将分别介绍在分类与回归任务中，如何运用MolTrans模型进行训练、评估和测试。MolTrans模型的具体实现请参阅`/apps/drug_target_interaction/moltrans_dti/`目录下的代码."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# MolTrans"
   ]
  },
  {
   "source": [
    "**MolTrans**是一个基于分子交互作用的Transformer结构，用于药物与靶点蛋白相互作用预测任务。它利用海量无标注的生物医学领域数据来萃取高质量的药物和靶点蛋白的子结构。具体来说，基于BPE的分词方法首先会将输入的药物和靶点蛋白序列进行分解，接着FCS模块会根据对应词表中的词频进行子结构的拼接与融合。然后，隐空间表征通过改进型Transformer会分别得到药物和靶点蛋白子结构的embeddings。接下来，在交互模块中，药物分子子结构和靶点蛋白分子子结构进行邻接融合并计算得到成对的交互分数。随后，交互图通过CNN层进一步提取更高阶的交互信息。最后，药物和靶点蛋白的交互信息通过decoder模块输出药物和靶点蛋白的亲和力打分或者成对的概率分数。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "![title](./figures/moltrans_model.png)"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
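  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The BPE decomposition described above can be illustrated with a toy, BPE-style pair-merge over a SMILES string. This is a simplified sketch for intuition only: the actual MolTrans pipeline uses the `subword-nmt` package with the pre-computed vocabularies under the `vocabulary` directory, not this toy code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "def most_frequent_pair(tokens):\n",
    "    # Count adjacent token pairs and return the most frequent one\n",
    "    pairs = Counter(zip(tokens, tokens[1:]))\n",
    "    return max(pairs, key=pairs.get)\n",
    "\n",
    "def bpe_merge(sequence, num_merges):\n",
    "    # Start from single characters and greedily merge the most frequent adjacent pair\n",
    "    tokens = list(sequence)\n",
    "    for _ in range(num_merges):\n",
    "        if len(tokens) < 2:\n",
    "            break\n",
    "        a, b = most_frequent_pair(tokens)\n",
    "        merged, i = [], 0\n",
    "        while i < len(tokens):\n",
    "            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):\n",
    "                merged.append(a + b)\n",
    "                i += 2\n",
    "            else:\n",
    "                merged.append(tokens[i])\n",
    "                i += 1\n",
    "        tokens = merged\n",
    "    return tokens\n",
    "\n",
    "aspirin = 'CC(=O)OC1=CC=CC=C1C(=O)O'  # aspirin SMILES\n",
    "print(bpe_merge(aspirin, 4))"
   ]
  },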
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "MolTrans的代码实现请参阅`/apps/drug_target_interaction/moltrans_dti/`目录中的内容，为执行后面的代码片段，需要将运行路径切换到该目录下（如下所示）。"
   ]
  },
  {
   "source": [
    "import os\n",
    "import sys\n",
    "\n",
    "sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), \"..\")))\n",
    "os.chdir('../apps/drug_target_interaction/moltrans_dti/')\n",
    "os.listdir(os.getcwd())"
   ],
   "cell_type": "code",
   "metadata": {},
   "execution_count": 0,
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "['train_cls.py',\n",
       " 'train_reg.py',\n",
       " 'README.md',\n",
       " 'pretrained_model',\n",
       " 'LICENSE',\n",
       " 'config.json',\n",
       " '.DS_Store',\n",
       " 'helper',\n",
       " 'vocabulary',\n",
       " 'double_towers.py',\n",
       " 'util_function.py',\n",
       " 'finetune_model',\n",
       " 'preprocess.py',\n",
       " 'requirement.txt']"
      ]
     },
     "metadata": {},
     "execution_count": 0
    }
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 数据集准备"
   ]
  },
  {
   "source": [
    "使用`wget`命令下载所有需要用到的DTI数据集。如果你的本地计算机上没有`wget`，你也可以复制下面的链接到你的浏览器中来下载数据。但是请注意你需要把数据集移动到`/apps/drug_target_interaction/moltrans_dti/`这个路径下。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "# 下载并解压数据\n",
    "!wget \"https://baidu-nlp.bj.bcebos.com/PaddleHelix/datasets/dti_datasets/dti_dataset.tgz\" --no-check-certificate\n",
    "!tar -zxvf \"dti_dataset.tgz\"\n",
    "!ls \"./dataset\""
   ],
   "cell_type": "code",
   "metadata": {},
   "execution_count": 1,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "--2021-05-10 13:15:07--  https://baidu-nlp.bj.bcebos.com/PaddleHelix/datasets/dti_datasets/dti_dataset.tgz\n",
      "Connecting to 172.19.61.250:3128... connected.\n",
      "WARNING: certificate common name \"*.bcebos.com\" doesn't match requested host name \"baidu-nlp.bj.bcebos.com\".\n",
      "Proxy request sent, awaiting response... 200 OK\n",
      "Length: 196384974 (187M) [application/gzip]\n",
      "Saving to: \"dti_dataset.tgz\"\n",
      "\n",
      "100%[======================================>] 196,384,974 11.3M/s   in 21s     \n",
      "\n",
      "2021-05-10 13:15:30 (8.86 MB/s) - \"dti_dataset.tgz\" saved [196384974/196384974]\n",
      "\n",
      "./dataset/\n",
      "./dataset/classification/\n",
      "./dataset/._.DS_Store\n",
      "./dataset/.DS_Store\n",
      "./dataset/regression/\n",
      "./dataset/regression/benchmark/\n",
      "./dataset/regression/._DAVIS\n",
      "./dataset/regression/DAVIS/\n",
      "./dataset/regression/._.DS_Store\n",
      "./dataset/regression/.DS_Store\n",
      "./dataset/regression/._BindingDB\n",
      "./dataset/regression/BindingDB/\n",
      "./dataset/regression/._KIBA\n",
      "./dataset/regression/KIBA/\n",
      "./dataset/regression/._ChEMBL\n",
      "./dataset/regression/ChEMBL/\n",
      "./dataset/regression/ChEMBL/._.DS_Store\n",
      "./dataset/regression/ChEMBL/.DS_Store\n",
      "./dataset/regression/ChEMBL/._Chem_SMILES.txt\n",
      "./dataset/regression/ChEMBL/Chem_SMILES.txt\n",
      "./dataset/regression/ChEMBL/._Chem_Affinity.txt\n",
      "./dataset/regression/ChEMBL/Chem_Affinity.txt\n",
      "./dataset/regression/ChEMBL/._ChEMBL cleanup.R\n",
      "./dataset/regression/ChEMBL/ChEMBL cleanup.R\n",
      "./dataset/regression/ChEMBL/._Chem_Kd_nM.txt\n",
      "./dataset/regression/ChEMBL/Chem_Kd_nM.txt\n",
      "./dataset/regression/ChEMBL/._ChEMBL_Target_Sequence.txt\n",
      "./dataset/regression/ChEMBL/ChEMBL_Target_Sequence.txt\n",
      "./dataset/regression/ChEMBL/Chem_SMILES_only.txt\n",
      "./dataset/regression/KIBA/._.DS_Store\n",
      "./dataset/regression/KIBA/.DS_Store\n",
      "./dataset/regression/KIBA/._affinity.txt\n",
      "./dataset/regression/KIBA/affinity.txt\n",
      "./dataset/regression/KIBA/._SMILES.txt\n",
      "./dataset/regression/KIBA/SMILES.txt\n",
      "./dataset/regression/KIBA/._target_seq.txt\n",
      "./dataset/regression/KIBA/target_seq.txt\n",
      "./dataset/regression/BindingDB/._BindingDB_SMILES.txt\n",
      "./dataset/regression/BindingDB/BindingDB_SMILES.txt\n",
      "./dataset/regression/BindingDB/._BindingDB_Target_Sequence.txt\n",
      "./dataset/regression/BindingDB/BindingDB_Target_Sequence.txt\n",
      "./dataset/regression/BindingDB/BindingDB_SMILES_new.txt\n",
      "./dataset/regression/BindingDB/._.DS_Store\n",
      "./dataset/regression/BindingDB/.DS_Store\n",
      "./dataset/regression/BindingDB/BindingDB_Target_Sequence_new.txt\n",
      "./dataset/regression/BindingDB/._bindingdb cleanup.R\n",
      "./dataset/regression/BindingDB/bindingdb cleanup.R\n",
      "./dataset/regression/BindingDB/._BindingDB_Kd.txt\n",
      "./dataset/regression/BindingDB/BindingDB_Kd.txt\n",
      "./dataset/regression/DAVIS/._.DS_Store\n",
      "./dataset/regression/DAVIS/.DS_Store\n",
      "./dataset/regression/DAVIS/._affinity.txt\n",
      "./dataset/regression/DAVIS/affinity.txt\n",
      "./dataset/regression/DAVIS/._SMILES.txt\n",
      "./dataset/regression/DAVIS/SMILES.txt\n",
      "./dataset/regression/DAVIS/._target_seq.txt\n",
      "./dataset/regression/DAVIS/target_seq.txt\n",
      "./dataset/regression/benchmark/._.DS_Store\n",
      "./dataset/regression/benchmark/.DS_Store\n",
      "./dataset/regression/benchmark/._KIBAtest\n",
      "./dataset/regression/benchmark/KIBAtest/\n",
      "./dataset/regression/benchmark/._DAVIStest\n",
      "./dataset/regression/benchmark/DAVIStest/\n",
      "./dataset/regression/benchmark/DAVIStest/._proteins.txt\n",
      "./dataset/regression/benchmark/DAVIStest/proteins.txt\n",
      "./dataset/regression/benchmark/DAVIStest/._.DS_Store\n",
      "./dataset/regression/benchmark/DAVIStest/.DS_Store\n",
      "./dataset/regression/benchmark/DAVIStest/._affinity.txt\n",
      "./dataset/regression/benchmark/DAVIStest/affinity.txt\n",
      "./dataset/regression/benchmark/DAVIStest/._folds\n",
      "./dataset/regression/benchmark/DAVIStest/folds/\n",
      "./dataset/regression/benchmark/DAVIStest/._ligands_can.txt\n",
      "./dataset/regression/benchmark/DAVIStest/ligands_can.txt\n",
      "./dataset/regression/benchmark/DAVIStest/._SMILES.txt\n",
      "./dataset/regression/benchmark/DAVIStest/SMILES.txt\n",
      "./dataset/regression/benchmark/DAVIStest/._target_seq.txt\n",
      "./dataset/regression/benchmark/DAVIStest/target_seq.txt\n",
      "./dataset/regression/benchmark/DAVIStest/._processed\n",
      "./dataset/regression/benchmark/DAVIStest/processed/\n",
      "./dataset/regression/benchmark/DAVIStest/._Y\n",
      "./dataset/regression/benchmark/DAVIStest/Y\n",
      "./dataset/regression/benchmark/DAVIStest/processed/._.DS_Store\n",
      "./dataset/regression/benchmark/DAVIStest/processed/.DS_Store\n",
      "./dataset/regression/benchmark/DAVIStest/processed/._test\n",
      "./dataset/regression/benchmark/DAVIStest/processed/test/\n",
      "./dataset/regression/benchmark/DAVIStest/processed/._train\n",
      "./dataset/regression/benchmark/DAVIStest/processed/train/\n",
      "./dataset/regression/benchmark/DAVIStest/processed/train/._davis_train_0.npz\n",
      "./dataset/regression/benchmark/DAVIStest/processed/train/davis_train_0.npz\n",
      "./dataset/regression/benchmark/DAVIStest/processed/test/._davis_test_0.npz\n",
      "./dataset/regression/benchmark/DAVIStest/processed/test/davis_test_0.npz\n",
      "./dataset/regression/benchmark/DAVIStest/folds/._test_fold_setting1.txt\n",
      "./dataset/regression/benchmark/DAVIStest/folds/test_fold_setting1.txt\n",
      "./dataset/regression/benchmark/DAVIStest/folds/._train_fold_setting1.txt\n",
      "./dataset/regression/benchmark/DAVIStest/folds/train_fold_setting1.txt\n",
      "./dataset/regression/benchmark/KIBAtest/._proteins.txt\n",
      "./dataset/regression/benchmark/KIBAtest/proteins.txt\n",
      "./dataset/regression/benchmark/KIBAtest/._.DS_Store\n",
      "./dataset/regression/benchmark/KIBAtest/.DS_Store\n",
      "./dataset/regression/benchmark/KIBAtest/._folds\n",
      "./dataset/regression/benchmark/KIBAtest/folds/\n",
      "./dataset/regression/benchmark/KIBAtest/._ligands_can.txt\n",
      "./dataset/regression/benchmark/KIBAtest/ligands_can.txt\n",
      "./dataset/regression/benchmark/KIBAtest/._processed\n",
      "./dataset/regression/benchmark/KIBAtest/processed/\n",
      "./dataset/regression/benchmark/KIBAtest/._Y\n",
      "./dataset/regression/benchmark/KIBAtest/Y\n",
      "./dataset/regression/benchmark/KIBAtest/processed/._.DS_Store\n",
      "./dataset/regression/benchmark/KIBAtest/processed/.DS_Store\n",
      "./dataset/regression/benchmark/KIBAtest/processed/._test\n",
      "./dataset/regression/benchmark/KIBAtest/processed/test/\n",
      "./dataset/regression/benchmark/KIBAtest/processed/._train\n",
      "./dataset/regression/benchmark/KIBAtest/processed/train/\n",
      "./dataset/regression/benchmark/KIBAtest/processed/train/._kiba_train_0.npz\n",
      "./dataset/regression/benchmark/KIBAtest/processed/train/kiba_train_0.npz\n",
      "./dataset/regression/benchmark/KIBAtest/processed/test/._kiba_test_0.npz\n",
      "./dataset/regression/benchmark/KIBAtest/processed/test/kiba_test_0.npz\n",
      "./dataset/regression/benchmark/KIBAtest/folds/._test_fold_setting1.txt\n",
      "./dataset/regression/benchmark/KIBAtest/folds/test_fold_setting1.txt\n",
      "./dataset/regression/benchmark/KIBAtest/folds/._train_fold_setting1.txt\n",
      "./dataset/regression/benchmark/KIBAtest/folds/train_fold_setting1.txt\n",
      "./dataset/classification/._DAVIS\n",
      "./dataset/classification/DAVIS/\n",
      "./dataset/classification/._.DS_Store\n",
      "./dataset/classification/.DS_Store\n",
      "./dataset/classification/._BIOSNAP\n",
      "./dataset/classification/BIOSNAP/\n",
      "./dataset/classification/._BindingDB\n",
      "./dataset/classification/BindingDB/\n",
      "./dataset/classification/BindingDB/._.DS_Store\n",
      "./dataset/classification/BindingDB/.DS_Store\n",
      "./dataset/classification/BindingDB/._val.csv\n",
      "./dataset/classification/BindingDB/val.csv\n",
      "./dataset/classification/BindingDB/._test.csv\n",
      "./dataset/classification/BindingDB/test.csv\n",
      "./dataset/classification/BindingDB/._train.csv\n",
      "./dataset/classification/BindingDB/train.csv\n",
      "./dataset/classification/BIOSNAP/._.DS_Store\n",
      "./dataset/classification/BIOSNAP/.DS_Store\n",
      "./dataset/classification/BIOSNAP/._unseen_drug\n",
      "./dataset/classification/BIOSNAP/unseen_drug/\n",
      "./dataset/classification/BIOSNAP/._unseen_protein\n",
      "./dataset/classification/BIOSNAP/unseen_protein/\n",
      "./dataset/classification/BIOSNAP/._missing_data\n",
      "./dataset/classification/BIOSNAP/missing_data/\n",
      "./dataset/classification/BIOSNAP/._full_data\n",
      "./dataset/classification/BIOSNAP/full_data/\n",
      "./dataset/classification/BIOSNAP/full_data/._val.csv\n",
      "./dataset/classification/BIOSNAP/full_data/val.csv\n",
      "./dataset/classification/BIOSNAP/full_data/._test.csv\n",
      "./dataset/classification/BIOSNAP/full_data/test.csv\n",
      "./dataset/classification/BIOSNAP/full_data/._train.csv\n",
      "./dataset/classification/BIOSNAP/full_data/train.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/._95\n",
      "./dataset/classification/BIOSNAP/missing_data/95/\n",
      "./dataset/classification/BIOSNAP/missing_data/._.DS_Store\n",
      "./dataset/classification/BIOSNAP/missing_data/.DS_Store\n",
      "./dataset/classification/BIOSNAP/missing_data/._80\n",
      "./dataset/classification/BIOSNAP/missing_data/80/\n",
      "./dataset/classification/BIOSNAP/missing_data/._90\n",
      "./dataset/classification/BIOSNAP/missing_data/90/\n",
      "./dataset/classification/BIOSNAP/missing_data/._70\n",
      "./dataset/classification/BIOSNAP/missing_data/70/\n",
      "./dataset/classification/BIOSNAP/missing_data/70/._val.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/70/val.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/70/._test.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/70/test.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/70/._train.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/70/train.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/90/._val.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/90/val.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/90/._test.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/90/test.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/90/._train.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/90/train.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/80/._val.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/80/val.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/80/._test.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/80/test.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/80/._train.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/80/train.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/95/._val.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/95/val.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/95/._test.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/95/test.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/95/._train.csv\n",
      "./dataset/classification/BIOSNAP/missing_data/95/train.csv\n",
      "./dataset/classification/BIOSNAP/unseen_protein/._val.csv\n",
      "./dataset/classification/BIOSNAP/unseen_protein/val.csv\n",
      "./dataset/classification/BIOSNAP/unseen_protein/._test.csv\n",
      "./dataset/classification/BIOSNAP/unseen_protein/test.csv\n",
      "./dataset/classification/BIOSNAP/unseen_protein/._train.csv\n",
      "./dataset/classification/BIOSNAP/unseen_protein/train.csv\n",
      "./dataset/classification/BIOSNAP/unseen_drug/._val.csv\n",
      "./dataset/classification/BIOSNAP/unseen_drug/val.csv\n",
      "./dataset/classification/BIOSNAP/unseen_drug/._test.csv\n",
      "./dataset/classification/BIOSNAP/unseen_drug/test.csv\n",
      "./dataset/classification/BIOSNAP/unseen_drug/._train.csv\n",
      "./dataset/classification/BIOSNAP/unseen_drug/train.csv\n",
      "./dataset/classification/DAVIS/._val.csv\n",
      "./dataset/classification/DAVIS/val.csv\n",
      "./dataset/classification/DAVIS/._test.csv\n",
      "./dataset/classification/DAVIS/test.csv\n",
      "./dataset/classification/DAVIS/._train.csv\n",
      "./dataset/classification/DAVIS/train.csv\n",
      "classification\tregression\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 环境安装"
   ]
  },
  {
   "source": [
    "在开始运行模型前，我们需要安装`requirement.txt`中要求的所有依赖包和环境."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
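  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Beyond just listing the pinned versions, you can quickly check which of them are already satisfied in the current environment. This is a small sketch using the standard-library `importlib.metadata` (Python 3.8+); the pinned lines below are a subset copied from `requirement.txt`. To actually install everything, run `!pip install -r requirement.txt`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from importlib import metadata\n",
    "\n",
    "# A few pinned requirements from requirement.txt (subset, for illustration)\n",
    "requirements = ['paddlepaddle==2.0.2', 'numpy==1.19.5', 'pandas==1.2.3']\n",
    "\n",
    "def parse_requirement(line):\n",
    "    # Split 'pkg==version' into its two parts\n",
    "    name, _, version = line.partition('==')\n",
    "    return name, version\n",
    "\n",
    "for line in requirements:\n",
    "    name, wanted = parse_requirement(line)\n",
    "    try:\n",
    "        installed = metadata.version(name)\n",
    "        status = 'OK' if installed == wanted else 'installed ' + installed\n",
    "    except metadata.PackageNotFoundError:\n",
    "        status = 'missing'\n",
    "    print(name, wanted, status)"
   ]
  },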
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "['paddlepaddle==2.0.2',\n",
       " 'visualdl==2.1.1',\n",
       " 'scikit-learn==0.24.1',\n",
       " 'scipy==1.6.1',\n",
       " 'subword-nmt==0.3.7',\n",
       " 'PyYAML==5.4.1',\n",
       " 'numpy==1.19.5',\n",
       " 'pandas==1.2.3']"
      ]
     },
     "metadata": {},
     "execution_count": 2
    }
   ],
   "source": [
    "file1 = open(\"requirement.txt\",\"r\")\n",
    "file1.read().splitlines()"
   ]
  },
  {
   "source": [
    "## 模型初始化"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "首先，导入所有相关的包和模块。关于`MolTransModel`的具体实现，请参阅脚本`double_towers.py`."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "import paddle\n",
    "import numpy as np\n",
    "from paddle import nn\n",
    "from paddle import io\n",
    "from helper import utils\n",
    "from double_towers import MolTransModel"
   ],
   "cell_type": "code",
   "metadata": {},
   "execution_count": 3,
   "outputs": []
  },
  {
   "source": [
    "接着从`config.json`文件中加载模型相关的所有默认超参数和配置。你可以根据自己需求更改超参和配置。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr = 5e-4                             # 学习率\n",
    "model_config = {\n",
    "    \"drug_max_seq\": 50,               # 药物序列最大长度\n",
    "    \"target_max_seq\": 545,            # 蛋白序列最大长度\n",
    "    \"emb_size\": 384,                  # Embedding矩阵的维度\n",
    "    \"input_drug_dim\": 23532,          # 药物词表的长度\n",
    "    \"input_target_dim\": 16693,        # 蛋白词表的长度\n",
    "    \"interm_size\": 1536,              # 隐空间大小\n",
    "    \"num_attention_heads\": 12,        # 注意力头的数目\n",
    "    \"flatten_dim\": 81750,             # Flatten的维度 \n",
    "    \"layer_size\": 2,                  # Transformer的层数\n",
    "    \"dropout_ratio\": 0.1,             # Dropout概率\n",
    "    \"attention_dropout_ratio\": 0.1,   # 注意力Dropout概率\n",
    "    \"hidden_dropout_ratio\": 0.1       # 隐层Dropout概率\n",
    "}"
   ]
  },
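  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on these numbers (an observation about this configuration, not something stated in the official config files): `flatten_dim` equals the size of the flattened drug-by-target interaction map with 3 CNN channels, assuming the CNN layer uses same-size padding."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# flatten_dim = drug_max_seq * target_max_seq * number of CNN filters (assumed 3)\n",
    "drug_max_seq, target_max_seq, cnn_filters = 50, 545, 3\n",
    "print(drug_max_seq * target_max_seq * cnn_filters)  # 81750, matching flatten_dim"
   ]
  },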
  {
   "source": [
    "设置种子数、机器和GPU用于后续任务。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 设置种子数\n",
    "paddle.seed(2)\n",
    "np.random.seed(3)\n",
    "\n",
    "# 设置GPU机器 CUDA_VISIBLE_DEVICES='你的机器编号'\n",
    "use_cuda = paddle.is_compiled_with_cuda()\n",
    "device = 'cuda:0' if use_cuda else 'cpu'\n",
    "device = device.replace('cuda', 'gpu')\n",
    "device = paddle.set_device(device)"
   ]
  },
  {
   "source": [
    "然后，我们根据指定的配置参数进行模型初始化。我们这里使用的优化器是Adam。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "model = MolTransModel(model_config)\n",
    "model = model.cuda()\n",
    "optim = utils.Adam(parameters=model.parameters(), learning_rate=lr)"
   ],
   "cell_type": "code",
   "metadata": {},
   "execution_count": 6,
   "outputs": []
  },
  {
   "source": [
    "## DTI分类任务"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "在此教程中，我们以DAVIS分类数据集为例。对于DTI分类任务，我们定义所有Kd值大于30的药物靶点蛋白对为正标签数据。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
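  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The labeling rule can be sketched as follows. The Kd values here are hypothetical, for illustration only; note that a lower Kd means stronger binding, and that the regression task instead models the log-scale affinity pKd = -log10(Kd / 1e9)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "# Hypothetical Kd values in nM (illustrative, not taken from the dataset)\n",
    "kd_values = [5.0, 30.0, 12000.0]\n",
    "\n",
    "# Binarization for classification: strong binders (low Kd) are positive\n",
    "labels = [1 if kd < 30 else 0 for kd in kd_values]\n",
    "print(labels)  # [1, 0, 0]\n",
    "\n",
    "# Log-scale transform commonly used for the regression task\n",
    "pkd = [-math.log10(kd * 1e-9) for kd in kd_values]\n",
    "print(pkd)"
   ]
  },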
  {
   "source": [
    "### 数据预处理"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "读取DAVIS分类数据集中的训练集、验证集和测试集。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "2086 3006 6011\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "data_path = './dataset/classification/DAVIS'\n",
    "training_set = pd.read_csv(data_path + '/train.csv')\n",
    "validation_set = pd.read_csv(data_path + '/val.csv')\n",
    "testing_set = pd.read_csv(data_path + '/test.csv')\n",
    "print(len(training_set), len(validation_set), len(testing_set))"
   ]
  },
  {
   "source": [
    "使用`DataEncoder`和`DataLoader`模块来处理输入数据。关于`DataEncoder`的具体实现，请参阅脚本`preprocess.py`。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "import paddle\n",
    "from helper import utils\n",
    "from preprocess import DataEncoder\n",
    "\n",
    "training_data = DataEncoder(training_set.index.values, training_set.Label.values, training_set)\n",
    "train_loader = utils.BaseDataLoader(training_data, batch_size=64, shuffle=True, \n",
    "                                        drop_last=False, num_workers=0)\n",
    "validation_data = DataEncoder(validation_set.index.values, validation_set.Label.values, validation_set)\n",
    "validation_loader = utils.BaseDataLoader(validation_data, batch_size=64, shuffle=False, \n",
    "                                        drop_last=False, num_workers=0)\n",
    "testing_data = DataEncoder(testing_set.index.values, testing_set.Label.values, testing_set)\n",
    "testing_loader = utils.BaseDataLoader(testing_data, batch_size=64, shuffle=False, \n",
    "                                        drop_last=False, num_workers=0)"
   ],
   "cell_type": "code",
   "metadata": {},
   "execution_count": 8,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 训练、验证和测试"
   ]
  },
  {
   "source": [
    "**基本设置**。为了获得更好实验效果，建议`max_epoch`至少设置为**200**。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "from paddle import nn\n",
    "\n",
    "# 基本设置\n",
    "optimal_auc = 0\n",
    "log_iter = 50\n",
    "log_step = 0\n",
    "max_epoch = 10\n",
    "\n",
    "# 设置损失函数\n",
    "sig = paddle.nn.Sigmoid()\n",
    "loss_fn = paddle.nn.BCELoss()"
   ]
  },
  {
   "source": [
    "**模型训练**。通过读取`train_loader`中的数据，模型按batch进行训练，BCELoss用于DTI分类任务。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "# 训练\n",
    "for epoch in range(max_epoch):\n",
    "    print(\"=====Start Training=====\")\n",
    "    model.train()\n",
    "    for batch_id, data in enumerate(train_loader):\n",
    "        d_out, mask_d_out, t_out, mask_t_out, label = data\n",
    "        temp = model(d_out.long().cuda(), t_out.long().cuda(), mask_d_out.long().cuda(), mask_t_out.long().cuda())\n",
    "        label = paddle.cast(label, \"float32\")\n",
    "        predicts = paddle.squeeze(sig(temp))\n",
    "        loss = loss_fn(predicts, label)\n",
    "\n",
    "        optim.clear_grad()\n",
    "        loss.backward()\n",
    "        optim.step()\n",
    "\n",
    "        if batch_id % log_iter == 0:\n",
    "            print(\"Training at epoch: {}, step: {}, loss is: {}\".format(epoch, batch_id, loss.cpu().detach().numpy()))\n",
    "            log_step += 1  "
   ],
   "cell_type": "code",
   "metadata": {},
   "execution_count": 10,
   "outputs": []
  },
  {
   "source": [
    "**分类任务的验证函数**。我们用AUROC、AUPRC、Precision、Recall、Accuracy等不同指标进行验证。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import (roc_auc_score, average_precision_score, f1_score, roc_curve, confusion_matrix, \n",
    "                             precision_score, recall_score, auc, mean_squared_error)\n",
    "\n",
    "# 验证函数\n",
    "def cls_test(data_generator, model):\n",
    "    \"\"\"\n",
    "    Test for classification task\n",
    "    \"\"\"\n",
    "    y_pred = []\n",
    "    y_label = []\n",
    "    loss_res = 0.0\n",
    "    count = 0.0\n",
    "\n",
    "    model.eval()    \n",
    "    for _, data in enumerate(data_generator):\n",
    "        d_out, mask_d_out, t_out, mask_t_out, label = data\n",
    "        temp = model(d_out.long().cuda(), t_out.long().cuda(), mask_d_out.long().cuda(), mask_t_out.long().cuda())\n",
    "        predicts = paddle.squeeze(sig(temp))\n",
    "        label = paddle.cast(label, \"float32\")\n",
    "\n",
    "        loss = loss_fn(predicts, label)\n",
    "        loss_res += loss\n",
    "        count += 1\n",
    "\n",
    "        predicts = predicts.detach().cpu().numpy()\n",
    "        label_id = label.to('cpu').numpy()\n",
    "        y_label = y_label + label_id.flatten().tolist()\n",
    "        y_pred = y_pred + predicts.flatten().tolist()\n",
    "    loss = loss_res / count\n",
    "\n",
    "    fpr, tpr, threshold = roc_curve(y_label, y_pred)\n",
    "    precision = tpr / (tpr + fpr)\n",
    "    f1 = 2 * precision * tpr / (tpr + precision + 1e-05)\n",
    "    optimal_threshold = threshold[5:][np.argmax(f1[5:])]\n",
    "    print(\"Optimal threshold: {}\".format(optimal_threshold))\n",
    "\n",
    "    y_pred_res = [(1 if i else 0) for i in y_pred >= optimal_threshold]\n",
    "    auroc = auc(fpr, tpr)\n",
    "    print(\"AUROC: {}\".format(auroc))\n",
    "    print(\"AUPRC: {}\".format(average_precision_score(y_label, y_pred)))\n",
    "\n",
    "    cf_mat = confusion_matrix(y_label, y_pred_res)\n",
    "    print(\"Confusion Matrix: \\n{}\".format(cf_mat))\n",
    "    print(\"Precision: {}\".format(precision_score(y_label, y_pred_res)))\n",
    "    print(\"Recall: {}\".format(recall_score(y_label, y_pred_res)))\n",
    "\n",
    "    total_res = sum(sum(cf_mat))\n",
    "    accuracy = (cf_mat[0, 0] + cf_mat[1, 1]) / total_res\n",
    "    print(\"Accuracy: {}\".format(accuracy))\n",
    "    sensitivity = cf_mat[0, 0] / (cf_mat[0, 0] + cf_mat[0, 1])\n",
    "    print(\"Sensitivity: {}\".format(sensitivity))\n",
    "    specificity = cf_mat[1, 1] / (cf_mat[1, 0] + cf_mat[1, 1])\n",
    "    print(\"Specificity: {}\".format(specificity))\n",
    "    outputs = np.asarray([(1 if i else 0) for i in np.asarray(y_pred) >= 0.5])\n",
    "    return (roc_auc_score(y_label, y_pred), \n",
    "            f1_score(y_label, outputs), loss.item())"
   ]
  },
  {
   "source": [
    "**模型验证**。AUROC指标用来评定模型效果，这里我们用更好的AUROC指标来选择最佳模型。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 验证\n",
    "print(\"=====Start Validation=====\")\n",
    "with paddle.no_grad():\n",
    "    auroc, f1, loss = cls_test(validation_loader, model) \n",
    "    print(\"Validation at epoch: {}, AUROC: {}, F1: {}, loss is: {}\".format(epoch, auroc, f1, loss))\n",
    "        \n",
    "    # 保存最佳模型\n",
    "    if auroc > optimal_auc:\n",
    "        optimal_auc = auroc\n",
    "        print(\"Saving the best_model...\")\n",
    "        print(\"Best AUROC: {}\".format(optimal_auc))\n",
    "        paddle.save(model.state_dict(), 'DAVIS_bestAUC_model_cls1')"
   ]
  },
  {
   "source": [
    "**模型测试**。读取最佳模型并进行测试。"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 读取预训练模型\n",
    "params_dict= paddle.load('DAVIS_bestAUC_model_cls1')\n",
    "model.set_dict(params_dict)\n",
    "\n",
    "# 测试\n",
    "print(\"=====Start Testing=====\")\n",
    "with paddle.no_grad():\n",
    "    try:\n",
    "        auroc, f1, loss = cls_test(testing_loader, model)\n",
    "        print(\"Testing result: AUROC: {}, F1: {}, Testing loss is: {}\".format(auroc, f1, loss))\n",
    "    except:\n",
    "        print(\"Testing failed...\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Starting Time: 1620636839.3391168\n",
      "W0510 16:53:59.340250 20693 device_context.cc:320] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.2, Runtime API Version: 10.1\n",
      "W0510 16:53:59.344445 20693 device_context.cc:330] device: 0, cuDNN Version: 7.6.\n",
      "=====Start Initial Testing=====\n",
      "Optimal threshold: 0.08874599635601044\n",
      "AUROC: 0.4449206833787793\n",
      "AUPRC: 0.04431604679412547\n",
      "Confusion Matrix: \n",
      "[[   1 5707]\n",
      " [   0  303]]\n",
      "Precision: 0.05041597337770383\n",
      "Recall: 1.0\n",
      "Accuracy: 0.05057394776243553\n",
      "Sensitivity: 0.0001751927119831815\n",
      "Specificity: 1.0\n",
      "Initial testing set: AUROC: 0.4449206833787793, F1: 0.0625, Testing loss: 0.5268805027008057\n",
      "=====Start Training=====\n",
      "Training at epoch: 0, step: 0, loss is: [0.77942777]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.46215638518333435\n",
      "AUROC: 0.7542153460997891\n",
      "AUPRC: 0.16601991199123586\n",
      "Confusion Matrix: \n",
      "[[1357 1489]\n",
      " [  17  143]]\n",
      "Precision: 0.08762254901960784\n",
      "Recall: 0.89375\n",
      "Accuracy: 0.499001996007984\n",
      "Sensitivity: 0.4768095572733661\n",
      "Specificity: 0.89375\n",
      "Validation at epoch: 0, AUROC: 0.7542153460997891, F1: 0.0, loss is: 0.6283206343650818\n",
      "Saving the best_model...\n",
      "Best AUROC: 0.7542153460997891\n",
      "=====Start Training=====\n",
      "Training at epoch: 1, step: 0, loss is: [0.70703137]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.4707314074039459\n",
      "AUROC: 0.7285137034434294\n",
      "AUPRC: 0.1615263449981204\n",
      "Confusion Matrix: \n",
      "[[1791 1055]\n",
      " [  41  119]]\n",
      "Precision: 0.10136286201022146\n",
      "Recall: 0.74375\n",
      "Accuracy: 0.635395874916833\n",
      "Sensitivity: 0.6293042867182009\n",
      "Specificity: 0.74375\n",
      "Validation at epoch: 1, AUROC: 0.7285137034434294, F1: 0.0, loss is: 0.6421152949333191\n",
      "=====Start Training=====\n",
      "Training at epoch: 2, step: 0, loss is: [0.69974554]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.5383076071739197\n",
      "AUROC: 0.6952180692199577\n",
      "AUPRC: 0.13936352003321878\n",
      "Confusion Matrix: \n",
      "[[ 751 2095]\n",
      " [  13  147]]\n",
      "Precision: 0.0655664585191793\n",
      "Recall: 0.91875\n",
      "Accuracy: 0.2987358616101131\n",
      "Sensitivity: 0.26387912860154605\n",
      "Specificity: 0.91875\n",
      "Validation at epoch: 2, AUROC: 0.6952180692199577, F1: 0.1010739102969046, loss is: 0.7653073072433472\n",
      "=====Start Training=====\n",
      "Training at epoch: 3, step: 0, loss is: [0.70805895]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.4703468978404999\n",
      "AUROC: 0.7167746398453969\n",
      "AUPRC: 0.17004080969611768\n",
      "Confusion Matrix: \n",
      "[[ 732 2114]\n",
      " [  11  149]]\n",
      "Precision: 0.06584180291648255\n",
      "Recall: 0.93125\n",
      "Accuracy: 0.2930805056553559\n",
      "Sensitivity: 0.2572030920590302\n",
      "Specificity: 0.93125\n",
      "Validation at epoch: 3, AUROC: 0.7167746398453969, F1: 0.0, loss is: 0.642682671546936\n",
      "=====Start Training=====\n",
      "Training at epoch: 4, step: 0, loss is: [0.68673825]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.4861532747745514\n",
      "AUROC: 0.7332912420941673\n",
      "AUPRC: 0.18720024663689563\n",
      "Confusion Matrix: \n",
      "[[1727 1119]\n",
      " [  41  119]]\n",
      "Precision: 0.09612277867528271\n",
      "Recall: 0.74375\n",
      "Accuracy: 0.614105123087159\n",
      "Sensitivity: 0.606816584680253\n",
      "Specificity: 0.74375\n",
      "Validation at epoch: 4, AUROC: 0.7332912420941673, F1: 0.0, loss is: 0.667766809463501\n",
      "=====Start Training=====\n",
      "Training at epoch: 5, step: 0, loss is: [0.6912267]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.5331630706787109\n",
      "AUROC: 0.7357244817287422\n",
      "AUPRC: 0.2135747002280149\n",
      "Confusion Matrix: \n",
      "[[1786 1060]\n",
      " [  41  119]]\n",
      "Precision: 0.10093299406276506\n",
      "Recall: 0.74375\n",
      "Accuracy: 0.6337325349301397\n",
      "Sensitivity: 0.6275474349964862\n",
      "Specificity: 0.74375\n",
      "Validation at epoch: 5, AUROC: 0.7357244817287422, F1: 0.10243111831442463, loss is: 0.7465436458587646\n",
      "=====Start Training=====\n",
      "Training at epoch: 6, step: 0, loss is: [0.70050865]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.2806451916694641\n",
      "AUROC: 0.7899156711173577\n",
      "AUPRC: 0.25305362270837706\n",
      "Confusion Matrix: \n",
      "[[1881  965]\n",
      " [  31  129]]\n",
      "Precision: 0.11791590493601463\n",
      "Recall: 0.80625\n",
      "Accuracy: 0.6686626746506986\n",
      "Sensitivity: 0.6609276177090654\n",
      "Specificity: 0.80625\n",
      "Validation at epoch: 6, AUROC: 0.7899156711173577, F1: 0.3193612774451098, loss is: 0.41791725158691406\n",
      "Saving the best_model...\n",
      "Best AUROC: 0.7899156711173577\n",
      "=====Start Training=====\n",
      "Training at epoch: 7, step: 0, loss is: [0.5820346]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.3972145915031433\n",
      "AUROC: 0.7961008872101194\n",
      "AUPRC: 0.2592659166374804\n",
      "Confusion Matrix: \n",
      "[[2133  713]\n",
      " [  32  128]]\n",
      "Precision: 0.15219976218787157\n",
      "Recall: 0.8\n",
      "Accuracy: 0.7521623419827013\n",
      "Sensitivity: 0.7494729444834856\n",
      "Specificity: 0.8\n",
      "Validation at epoch: 7, AUROC: 0.7961008872101194, F1: 0.2810945273631841, loss is: 0.4330136477947235\n",
      "Saving the best_model...\n",
      "Best AUROC: 0.7961008872101194\n",
      "=====Start Training=====\n",
      "Training at epoch: 8, step: 0, loss is: [0.49043196]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.48633480072021484\n",
      "AUROC: 0.8339094342937456\n",
      "AUPRC: 0.27809044020235774\n",
      "Confusion Matrix: \n",
      "[[2337  509]\n",
      " [  32  128]]\n",
      "Precision: 0.20094191522762953\n",
      "Recall: 0.8\n",
      "Accuracy: 0.8200266134397871\n",
      "Sensitivity: 0.8211524947294448\n",
      "Specificity: 0.8\n",
      "Validation at epoch: 8, AUROC: 0.8339094342937456, F1: 0.3214285714285714, loss is: 0.4535229206085205\n",
      "Saving the best_model...\n",
      "Best AUROC: 0.8339094342937456\n",
      "=====Start Training=====\n",
      "Training at epoch: 9, step: 0, loss is: [0.52378976]\n",
      "=====Start Validation=====\n",
      "Optimal threshold: 0.252903550863266\n",
      "AUROC: 0.806647487702038\n",
      "AUPRC: 0.28221522256629755\n",
      "Confusion Matrix: \n",
      "[[2353  493]\n",
      " [  39  121]]\n",
      "Precision: 0.1970684039087948\n",
      "Recall: 0.75625\n",
      "Accuracy: 0.823020625415835\n",
      "Sensitivity: 0.8267744202389319\n",
      "Specificity: 0.75625\n",
      "Validation at epoch: 9, AUROC: 0.806647487702038, F1: 0.3509369676320272, loss is: 0.39919421076774597\n",
      "Final AUROC: 0.8339094342937456\n",
      "=====Start Testing=====\n",
      "Optimal threshold: 0.4327976107597351\n",
      "AUROC: 0.8565241072110014\n",
      "AUPRC: 0.27200004740468986\n",
      "Confusion Matrix: \n",
      "[[4538 1170]\n",
      " [  59  244]]\n",
      "Precision: 0.17256011315417255\n",
      "Recall: 0.8052805280528053\n",
      "Accuracy: 0.7955415072367327\n",
      "Sensitivity: 0.7950245269796776\n",
      "Specificity: 0.8052805280528053\n",
      "Testing result: AUROC: 0.8565241072110014, F1: 0.3031470777135517, Testing loss is: 0.4698520302772522\n",
      "Ending Time: 1620637118.1968617\n",
      "Duration is: 278.8577449321747\n"
     ]
    }
   ],
   "source": [
    "!CUDA_VISIBLE_DEVICES='6' python train_cls.py --epochs 10"
   ]
  },
  {
   "source": [
    "## DTI Regression Task"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "The original MolTrans paper covers only the classification task, but we also ran experiments on regression. In this tutorial we use the DAVIS dataset as an example, with the same split as GraphDTA and DGraphDTA. In real-world settings, drug-target interactions and binding affinities are usually measured with multiple indicators such as Kd, IC50 and Ki, so predicting an affinity score for a drug and target protein is often more informative than a binary classification."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
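  {
   "source": [
    "For reference, the DAVIS affinities are usually modeled on a log-transformed scale, pKd = -log10(Kd / 1e9) with Kd in nanomolar, as in GraphDTA and DGraphDTA. A minimal sketch of that conversion (the helper name `kd_to_pkd` is ours for illustration):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def kd_to_pkd(kd_nm):\n",
    "    # Convert Kd in nanomolar to pKd = -log10(Kd / 1e9)\n",
    "    return -math.log10(kd_nm / 1e9)\n",
    "\n",
    "print(kd_to_pkd(10000.0))  # Kd = 10 uM -> pKd = 5.0\n",
    "```"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },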
  {
   "source": [
    "### Data Preprocessing"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "Load and preprocess the DAVIS benchmark dataset. For the implementation of `load_davis_dataset`, see the script `util_function.py`."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "import pandas as pd\n",
    "from helper import utils\n",
    "from preprocess import DataEncoder\n",
    "from util_function import load_davis_dataset\n",
    "\n",
    "# Load the DAVIS benchmark and unpack SMILES, protein sequences and affinities\n",
    "trainset, testset = load_davis_dataset()\n",
    "trainset_smiles = [d['smiles'] for d in trainset]\n",
    "trainset_protein = [d['protein'] for d in trainset]\n",
    "trainset_aff = [d['aff'] for d in trainset]\n",
    "\n",
    "testset_smiles = [d['smiles'] for d in testset]\n",
    "testset_protein = [d['protein'] for d in testset]\n",
    "testset_aff = [d['aff'] for d in testset]\n",
    "\n",
    "df_data_t = pd.DataFrame(zip(trainset_smiles, trainset_protein, trainset_aff))\n",
    "df_data_t.rename(columns={0: 'SMILES', 1: 'Target Sequence', 2: 'Label'}, inplace=True)\n",
    "df_data_tt = pd.DataFrame(zip(testset_smiles, testset_protein, testset_aff))\n",
    "df_data_tt.rename(columns={0: 'SMILES', 1: 'Target Sequence', 2: 'Label'}, inplace=True)\n",
    "\n",
    "num_workers = 0  # number of data-loading workers; raise for multi-process loading\n",
    "reg_training_data = DataEncoder(df_data_t.index.values, df_data_t.Label.values, df_data_t)\n",
    "reg_train_loader = utils.BaseDataLoader(reg_training_data, batch_size=64,\n",
    "                                        shuffle=True, drop_last=False, num_workers=num_workers)\n",
    "reg_validation_data = DataEncoder(df_data_tt.index.values, df_data_tt.Label.values, df_data_tt)\n",
    "reg_validation_loader = utils.BaseDataLoader(reg_validation_data, batch_size=64,\n",
    "                                             shuffle=False, drop_last=False, num_workers=num_workers)"
   ]
  },
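  {
   "source": [
    "The zip-into-DataFrame-then-rename pattern above can be checked in isolation with toy records (the SMILES and sequence strings below are made up for illustration, assuming only pandas):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Toy drug/target/affinity triples standing in for DAVIS records\n",
    "smiles = ['CCO', 'c1ccccc1']\n",
    "proteins = ['MKV', 'MGA']\n",
    "affinity = [5.0, 7.2]\n",
    "\n",
    "df = pd.DataFrame(zip(smiles, proteins, affinity))\n",
    "df.rename(columns={0: 'SMILES', 1: 'Target Sequence', 2: 'Label'}, inplace=True)\n",
    "print(list(df.columns))  # ['SMILES', 'Target Sequence', 'Label']\n",
    "```"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },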
  {
   "source": [
    "### Training and Validation"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "**Basic settings**. For better experimental results, we recommend setting `max_epoch` to at least **200**."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "\n",
    "# Basic settings\n",
    "optimal_mse = 10000\n",
    "optimal_CI = 0\n",
    "log_iter = 50\n",
    "log_step = 0\n",
    "max_epoch = 10\n",
    "\n",
    "# Loss function for the regression task\n",
    "reg_loss_fn = paddle.nn.MSELoss()"
   ]
  },
  {
   "source": [
    "**Model training**. The model is trained batch by batch on the data from `reg_train_loader`, with `MSELoss` as the objective for the DTI regression task."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Training loop\n",
    "for epoch in range(max_epoch):\n",
    "    print(\"=====Go for Training=====\")\n",
    "    model.train()        \n",
    "    # Regression task\n",
    "    for batch_id, data in enumerate(reg_train_loader):\n",
    "        d_out, mask_d_out, t_out, mask_t_out, label = data\n",
    "        temp = model(d_out.long().cuda(), t_out.long().cuda(), mask_d_out.long().cuda(), mask_t_out.long().cuda())\n",
    "        label = paddle.cast(label, \"float32\")\n",
    "        predicts = paddle.squeeze(temp)\n",
    "        loss = reg_loss_fn(predicts, label)\n",
    "\n",
    "        optim.clear_grad()\n",
    "        loss.backward()\n",
    "        optim.step()\n",
    "\n",
    "        if batch_id % log_iter == 0:\n",
    "            print(\"Training at epoch: {}, step: {}, loss is: {}\".format(epoch, batch_id, loss.cpu().detach().numpy()))\n",
    "            log_step += 1"
   ]
  },
  {
   "source": [
    "**Validation function for the regression task**. We validate with metrics such as MSE and CI (concordance index)."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
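  {
   "source": [
    "The concordance index (CI) measures how often the predicted affinities rank pairs of examples in the same order as the true affinities, with prediction ties counted as 0.5. A minimal pure-Python sketch of the idea (our own O(n^2) reference, not the `concordance_index1` imported from `preprocess.py`):\n",
    "\n",
    "```python\n",
    "def concordance_index(y_true, y_pred):\n",
    "    # Fraction of pairs with different true affinities whose\n",
    "    # predictions are ordered the same way (ties count 0.5)\n",
    "    concordant, total = 0.0, 0\n",
    "    n = len(y_true)\n",
    "    for i in range(n):\n",
    "        for j in range(i + 1, n):\n",
    "            if y_true[i] == y_true[j]:\n",
    "                continue\n",
    "            total += 1\n",
    "            diff = (y_pred[i] - y_pred[j]) * (y_true[i] - y_true[j])\n",
    "            if diff > 0:\n",
    "                concordant += 1.0\n",
    "            elif diff == 0:\n",
    "                concordant += 0.5\n",
    "    return concordant / total\n",
    "\n",
    "print(concordance_index([1.0, 2.0, 3.0], [0.2, 0.4, 0.9]))  # 1.0 (perfect ordering)\n",
    "```"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },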
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from preprocess import concordance_index1\n",
    "\n",
    "# Validation function\n",
    "def reg_test(data_generator, model):\n",
    "    \"\"\"\n",
    "    Test for regression task\n",
    "    \"\"\"\n",
    "    y_pred = []\n",
    "    y_label = []\n",
    "\n",
    "    model.eval()\n",
    "    for _, data in enumerate(data_generator):\n",
    "        d_out, mask_d_out, t_out, mask_t_out, label = data\n",
    "        temp = model(d_out.long().cuda(), t_out.long().cuda(), mask_d_out.long().cuda(), mask_t_out.long().cuda())\n",
    "\n",
    "        label = paddle.cast(label, \"float32\")\n",
    "        predicts = paddle.squeeze(temp, axis=1)\n",
    "\n",
    "        loss = reg_loss_fn(predicts, label)\n",
    "        predict_id = paddle.squeeze(temp).detach().cpu().numpy()\n",
    "        label_id = label.to('cpu').numpy()\n",
    "\n",
    "        y_label = y_label + label_id.flatten().tolist()\n",
    "        y_pred = y_pred + predict_id.flatten().tolist()\n",
    "\n",
    "    # Compute MSE and CI over the whole validation set\n",
    "    total_label = np.array(y_label)\n",
    "    total_pred = np.array(y_pred)\n",
    "    mse = ((total_label - total_pred) ** 2).mean()\n",
    "    return (mse, concordance_index1(total_label, total_pred), loss.item())"
   ]
  },
  {
   "source": [
    "**Model validation**. CI and MSE are used to assess the model; here we save the best model by MSE and by CI separately."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Validation\n",
    "print(\"=====Go for Validation=====\")\n",
    "with paddle.no_grad():\n",
    "    mse, CI, reg_loss = reg_test(reg_validation_loader, model)\n",
    "    print(\"Validation at epoch: {}, MSE: {}, CI: {}, loss is: {}\".format(epoch, mse, CI, reg_loss))\n",
    "        \n",
    "    # Save the best models\n",
    "    if mse < optimal_mse:\n",
    "        optimal_mse = mse\n",
    "        print(\"Saving the best_model with best MSE...\")\n",
    "        print(\"Best MSE: {}\".format(optimal_mse))\n",
    "        paddle.save(model.state_dict(), 'DAVIS_bestMSE_model_reg1')\n",
    "    if CI > optimal_CI:\n",
    "        optimal_CI = CI\n",
    "        print(\"Saving the best_model with best CI...\")\n",
    "        print(\"Best CI: {}\".format(optimal_CI))\n",
    "        paddle.save(model.state_dict(), 'DAVIS_bestCI_model_reg1')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Starting Time: 1620644443.8780835\n",
      "W0510 19:00:43.879191 15912 device_context.cc:320] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.2, Runtime API Version: 10.1\n",
      "W0510 19:00:43.883350 15912 device_context.cc:330] device: 0, cuDNN Version: 7.6.\n",
      "=====Go for Initial Testing=====\n",
      "Testing result: MSE: 36.81499150313501, CI: 0.4923231465886854\n",
      "=====Go for Training=====\n",
      "Training at epoch: 0, step: 0, loss is: [36.40392]\n",
      "Training at epoch: 0, step: 50, loss is: [0.7971545]\n",
      "Training at epoch: 0, step: 100, loss is: [1.5158722]\n",
      "Training at epoch: 0, step: 150, loss is: [0.59505415]\n",
      "Training at epoch: 0, step: 200, loss is: [0.6165484]\n",
      "Training at epoch: 0, step: 250, loss is: [1.2195725]\n",
      "Training at epoch: 0, step: 300, loss is: [0.85159874]\n",
      "Training at epoch: 0, step: 350, loss is: [0.6839756]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 0, MSE: 0.8015637226389766, CI: 0.6708589398191237, loss is: 0.8781716227531433\n",
      "Saving the best_model with best MSE...\n",
      "Best MSE: 0.8015637226389766\n",
      "Saving the best_model with best CI...\n",
      "Best CI: 0.6708589398191237\n",
      "=====Go for Training=====\n",
      "Training at epoch: 1, step: 0, loss is: [0.77601284]\n",
      "Training at epoch: 1, step: 50, loss is: [0.38790193]\n",
      "Training at epoch: 1, step: 100, loss is: [0.4093484]\n",
      "Training at epoch: 1, step: 150, loss is: [1.0540824]\n",
      "Training at epoch: 1, step: 200, loss is: [1.1064628]\n",
      "Training at epoch: 1, step: 250, loss is: [0.7827107]\n",
      "Training at epoch: 1, step: 300, loss is: [0.6658678]\n",
      "Training at epoch: 1, step: 350, loss is: [0.83563316]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 1, MSE: 0.7485169280561876, CI: 0.7480033871241194, loss is: 0.8059883713722229\n",
      "Saving the best_model with best MSE...\n",
      "Best MSE: 0.7485169280561876\n",
      "Saving the best_model with best CI...\n",
      "Best CI: 0.7480033871241194\n",
      "=====Go for Training=====\n",
      "Training at epoch: 2, step: 0, loss is: [0.37989652]\n",
      "Training at epoch: 2, step: 50, loss is: [0.7701365]\n",
      "Training at epoch: 2, step: 100, loss is: [0.41347465]\n",
      "Training at epoch: 2, step: 150, loss is: [0.2531948]\n",
      "Training at epoch: 2, step: 200, loss is: [0.20149264]\n",
      "Training at epoch: 2, step: 250, loss is: [0.44797474]\n",
      "Training at epoch: 2, step: 300, loss is: [0.64271253]\n",
      "Training at epoch: 2, step: 350, loss is: [0.594674]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 2, MSE: 0.5130963456329065, CI: 0.7951113059893562, loss is: 0.6232266426086426\n",
      "Saving the best_model with best MSE...\n",
      "Best MSE: 0.5130963456329065\n",
      "Saving the best_model with best CI...\n",
      "Best CI: 0.7951113059893562\n",
      "=====Go for Training=====\n",
      "Training at epoch: 3, step: 0, loss is: [0.6624823]\n",
      "Training at epoch: 3, step: 50, loss is: [0.36749476]\n",
      "Training at epoch: 3, step: 100, loss is: [0.48208517]\n",
      "Training at epoch: 3, step: 150, loss is: [0.44298726]\n",
      "Training at epoch: 3, step: 200, loss is: [0.51683795]\n",
      "Training at epoch: 3, step: 250, loss is: [0.9160007]\n",
      "Training at epoch: 3, step: 300, loss is: [0.42881072]\n",
      "Training at epoch: 3, step: 350, loss is: [0.48415303]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 3, MSE: 0.5159317524793656, CI: 0.7910200625383552, loss is: 0.6242136359214783\n",
      "=====Go for Training=====\n",
      "Training at epoch: 4, step: 0, loss is: [0.6200688]\n",
      "Training at epoch: 4, step: 50, loss is: [0.57157224]\n",
      "Training at epoch: 4, step: 100, loss is: [0.24151284]\n",
      "Training at epoch: 4, step: 150, loss is: [0.49442887]\n",
      "Training at epoch: 4, step: 200, loss is: [0.32462287]\n",
      "Training at epoch: 4, step: 250, loss is: [0.4290502]\n",
      "Training at epoch: 4, step: 300, loss is: [0.44636822]\n",
      "Training at epoch: 4, step: 350, loss is: [0.5914639]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 4, MSE: 0.5030643339108036, CI: 0.7901386690084728, loss is: 0.49283361434936523\n",
      "Saving the best_model with best MSE...\n",
      "Best MSE: 0.5030643339108036\n",
      "=====Go for Training=====\n",
      "Training at epoch: 5, step: 0, loss is: [0.6327672]\n",
      "Training at epoch: 5, step: 50, loss is: [0.20113182]\n",
      "Training at epoch: 5, step: 100, loss is: [0.34514356]\n",
      "Training at epoch: 5, step: 150, loss is: [0.51452327]\n",
      "Training at epoch: 5, step: 200, loss is: [0.4211307]\n",
      "Training at epoch: 5, step: 250, loss is: [0.26193377]\n",
      "Training at epoch: 5, step: 300, loss is: [0.29113644]\n",
      "Training at epoch: 5, step: 350, loss is: [0.32035032]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 5, MSE: 0.5033568819927414, CI: 0.8032911892962222, loss is: 0.49914076924324036\n",
      "Saving the best_model with best CI...\n",
      "Best CI: 0.8032911892962222\n",
      "=====Go for Training=====\n",
      "Training at epoch: 6, step: 0, loss is: [0.8875035]\n",
      "Training at epoch: 6, step: 50, loss is: [0.31565398]\n",
      "Training at epoch: 6, step: 100, loss is: [0.60160255]\n",
      "Training at epoch: 6, step: 150, loss is: [0.6101297]\n",
      "Training at epoch: 6, step: 200, loss is: [0.5152693]\n",
      "Training at epoch: 6, step: 250, loss is: [0.29085428]\n",
      "Training at epoch: 6, step: 300, loss is: [0.26402837]\n",
      "Training at epoch: 6, step: 350, loss is: [0.42066595]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 6, MSE: 0.4777158285347775, CI: 0.801435744880145, loss is: 0.509095311164856\n",
      "Saving the best_model with best MSE...\n",
      "Best MSE: 0.4777158285347775\n",
      "=====Go for Training=====\n",
      "Training at epoch: 7, step: 0, loss is: [0.50502145]\n",
      "Training at epoch: 7, step: 50, loss is: [0.434888]\n",
      "Training at epoch: 7, step: 100, loss is: [0.3100025]\n",
      "Training at epoch: 7, step: 150, loss is: [0.66013527]\n",
      "Training at epoch: 7, step: 200, loss is: [0.58618945]\n",
      "Training at epoch: 7, step: 250, loss is: [0.63922274]\n",
      "Training at epoch: 7, step: 300, loss is: [0.7331528]\n",
      "Training at epoch: 7, step: 350, loss is: [0.52831435]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 7, MSE: 0.4821291413520435, CI: 0.803972795187576, loss is: 0.46738845109939575\n",
      "Saving the best_model with best CI...\n",
      "Best CI: 0.803972795187576\n",
      "=====Go for Training=====\n",
      "Training at epoch: 8, step: 0, loss is: [0.30136567]\n",
      "Training at epoch: 8, step: 50, loss is: [0.27696216]\n",
      "Training at epoch: 8, step: 100, loss is: [0.42921004]\n",
      "Training at epoch: 8, step: 150, loss is: [0.61006665]\n",
      "Training at epoch: 8, step: 200, loss is: [0.34988898]\n",
      "Training at epoch: 8, step: 250, loss is: [0.5139885]\n",
      "Training at epoch: 8, step: 300, loss is: [0.4345175]\n",
      "Training at epoch: 8, step: 350, loss is: [0.21153398]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 8, MSE: 0.47525263096929343, CI: 0.785754138223028, loss is: 0.47755154967308044\n",
      "Saving the best_model with best MSE...\n",
      "Best MSE: 0.47525263096929343\n",
      "=====Go for Training=====\n",
      "Training at epoch: 9, step: 0, loss is: [0.6673766]\n",
      "Training at epoch: 9, step: 50, loss is: [0.3438828]\n",
      "Training at epoch: 9, step: 100, loss is: [0.22004753]\n",
      "Training at epoch: 9, step: 150, loss is: [0.27555895]\n",
      "Training at epoch: 9, step: 200, loss is: [0.2196229]\n",
      "Training at epoch: 9, step: 250, loss is: [0.5212375]\n",
      "Training at epoch: 9, step: 300, loss is: [0.38815624]\n",
      "Training at epoch: 9, step: 350, loss is: [0.43532062]\n",
      "=====Go for Validation=====\n",
      "Validation at epoch: 9, MSE: 0.4712291920781906, CI: 0.7921701624015439, loss is: 0.5183619856834412\n",
      "Saving the best_model with best MSE...\n",
      "Best MSE: 0.4712291920781906\n",
      "Best MSE: 0.4712291920781906\n",
      "Best CI: 0.803972795187576\n",
      "Ending Time: 1620646551.8915026\n",
      "Duration is: 2108.013419151306\n"
     ]
    }
   ],
   "source": [
    "!CUDA_VISIBLE_DEVICES='5' python train_reg.py --epochs 10"
   ]
  },
  {
   "source": [
    "Beyond the examples shown above, you can also try other drug-target interaction datasets; see the `apps/drug_target_interaction/moltrans_dti/dataset` directory. For the full implementation and more details of the MolTrans model, please refer to the code. If you have any questions, please open an issue on the official PaddleHelix GitHub repository and we will respond promptly!"
   ],
   "cell_type": "markdown",
   "metadata": {}
  }
 ],
 "metadata": {
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3.8.8 64-bit",
   "metadata": {
    "interpreter": {
     "hash": "2e7c3e562fc896c32281f71516f95a71e3c3a5730a682e6cb3b265be12605d53"
    }
   }
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}