{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# DistMult 实践\n",
    "\n",
    "在这个演示中，我们使用DistMUlt([论文链接](http://proceedings.mlr.press/v48/trouillon16.pdf))对示例中文知识图谱进行链接预测，从而达到补全知识图谱的目的。\n",
    "\n",
    "希望在这个demo中帮助大家了解知识图谱表示学习的作用原理和机制。\n",
    "\n",
    "本demo建议使用python3运行。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 数据集\n",
    "这个示例中，我们使用的是表示学习模型做知识图谱链接预测常用的benchmark数据集 FB15k-237：\n",
    "\n",
    "| #Ent | #Rel | # Train | #Test | #Valid |\n",
    "| --- | --- | --- | --- | --- |\n",
    "| 14,541| 237| 272,115| 17,535| 20,466 |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### DistMult 原理回顾\n",
    "DistMult将每个实体表示为一个向量，每个关系都表示成一个矩阵，\n",
    "并假设对于一个存在在知识图谱中的三元组$(h,r,t)$, \n",
    "$h, r, t$的向量表示$\\mathbf{h}, \\mathbf{M}_r, \\mathbf{t}$满足：\n",
    "$$\\mathbf{h}\\mathbf{M}_r = \\mathbf{t}$$\n",
    "\n",
    "\n",
    "对于每个正确的三元组的优化目标是：\n",
    "$$\\mathbf{h} \\mathbf{M}_r \\approx \\mathbf{t}$$\n",
    "DistMult采用点积来衡量两个向量的相似度，所以对于一个三元组的评分函数为：\n",
    "$$f_r(h,t) = \\mathbf{h}\\mathbf{M}_r \\mathbf{t} $$\n",
    "对于正样本评分较高，负样本评分较低\n",
    "\n",
    "DistMult的损失函数：\n",
    "$$ L = \\sum_{(h,r,t)\\in S} \\sum_{(h^\\prime, r^\\prime, t^\\prime) \\in S^\\prime} max(0, f_{r^\\prime} (h^\\prime, t^\\prime)  + \\gamma - f_r(h,t)) $$\n",
    "其中$S$是所有正样本的集合，$S^\\prime$是所有负样本的集合，对于一个正样本$(h,r,t)$负样本通过随机替换$h$或$t$得到， $\\gamma$表示间隔，是一个超参。\n",
    "\n",
    "根据DistMult原文的实验结果，当关系的表示设为对角阵时链接预测效果较好，本示例也采取了对角阵的表示方式，示例中采用了如下loss：\n",
    "$$ L = \\frac{1}{len(S)}\\sum_{(h,r,t)\\in \\{S, S^\\prime\\}}log(e^{(- f_r(h,t)*lable)} + 1)$$\n",
    "其中正样本的lable为1，负样本的lable为-1.\n",
    "\n"
   ]
  },
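The diagonal scoring function and the softplus loss above can be sketched in a few lines of NumPy (a standalone illustration with made-up toy embeddings, separate from the TensorFlow implementation below):

```python
import numpy as np

def score(h, r, t):
    # diagonal DistMult: f_r(h, t) = h^T diag(r) t = sum_i h_i * r_i * t_i
    return float(np.sum(h * r * t))

def softplus_loss(s, label):
    # log(exp(-s * label) + 1); label is +1 for positives, -1 for negatives
    return float(np.log1p(np.exp(-s * label)))

rng = np.random.default_rng(0)
h, r, t = (rng.uniform(-1, 1, 4) for _ in range(3))
s = score(h, r, t)
# the loss pushes positive triples toward high scores and negatives toward low ones
print(softplus_loss(s, +1), softplus_loss(s, -1))
```

Because the relation matrix is diagonal, the score is just an element-wise triple product, which is exactly what the `reduce_sum(head * relation * tail)` lines in the model below compute.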
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 代码实践"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/zhangwen/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
      "  from ._conv import register_converters as _register_converters\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf \n",
    "import time \n",
    "import argparse\n",
    "import random\n",
    "import numpy as np \n",
    "import os.path\n",
    "import math\n",
    "import timeit\n",
    "from multiprocessing import JoinableQueue, Queue, Process\n",
    "from collections import defaultdict"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "class DistMult:\n",
    "    @property\n",
    "    def variables(self):\n",
    "        return self.__variables\n",
    "\n",
    "    @property\n",
    "    def num_triple_train(self):\n",
    "        return self.__num_triple_train\n",
    "\n",
    "    @property \n",
    "    def num_triple_test(self):\n",
    "        return self.__num_triple_test\n",
    "\n",
    "    @property\n",
    "    def testing_data(self):\n",
    "        return self.__triple_test\n",
    "\n",
    "    @property \n",
    "    def num_entity(self):\n",
    "        return self.__num_entity\n",
    "\n",
    "    @property\n",
    "    def embedding_entity(self):\n",
    "        return self.__embedding_entity\n",
    "\n",
    "\n",
    "    @property\n",
    "    def embedding_relation(self):\n",
    "        return self.__embedding_relation\n",
    "\n",
    "    @property\n",
    "    def hr_t(self):\n",
    "        return self.__hr_t\n",
    "\n",
    "    @property \n",
    "    def tr_h(self):\n",
    "        return self.__tr_h\n",
    "    \n",
    "    @property\n",
    "    def entity2id(self):\n",
    "        return self.__entity2id\n",
    "    \n",
    "    @property\n",
    "    def relation2id(self):\n",
    "        return self.__relation2id\n",
    "\n",
    "    @property\n",
    "    def id2entity(self):\n",
    "        return self.__id2entity\n",
    "    \n",
    "    @property\n",
    "    def id2relation(self):\n",
    "        return self.__id2relation\n",
    "\n",
    "    def training_data_batch(self, batch_size = 512):\n",
    "        n_triple = len(self.__triple_train)\n",
    "        rand_idx = np.random.permutation(n_triple)\n",
    "        start = 0\n",
    "        while start < n_triple:\n",
    "            start_t = timeit.default_timer()\n",
    "            end = min(start+batch_size, n_triple)\n",
    "            size = end - start \n",
    "            train_triple_positive = np.asarray([ self.__triple_train[x] for x in  rand_idx[start:end]])\n",
    "\n",
    "            num_neg = 1\n",
    "            train_negative1 = np.repeat(train_triple_positive, num_neg, axis=0)\n",
    "            train_negative2 = np.repeat(train_triple_positive, num_neg, axis=0)\n",
    "            train_negative1[:, 0] = np.random.randint(self.__num_entity, size=num_neg*size)\n",
    "            train_negative2[:, 2] = np.random.randint(self.__num_entity, size=num_neg*size)\n",
    "            train_triple_negative = np.concatenate((train_negative1, train_negative2), axis=0)\n",
    "\n",
    "            start = end\n",
    "            prepare_t = timeit.default_timer()-start_t\n",
    "\n",
    "            yield train_triple_positive, train_triple_negative, prepare_t\n",
    "\n",
    "\n",
    "    def __init__(self, data_dir, negative_sampling,learning_rate, \n",
    "             batch_size, max_iter, margin, dimension, norm, evaluation_size, regularizer_weight):\n",
    "        # this part for data prepare\n",
    "        self.__data_dir=data_dir\n",
    "        self.__negative_sampling=negative_sampling\n",
    "        self.__regularizer_weight = regularizer_weight\n",
    "        self.__norm = norm\n",
    "\n",
    "        self.__entity2id={}\n",
    "        self.__id2entity={}\n",
    "        self.__relation2id={}\n",
    "        self.__id2relation={}\n",
    "\n",
    "        self.__triple_train=[] #[(head_id, relation_id, tail_id),...]\n",
    "        self.__triple_test=[]\n",
    "        self.__triple_valid=[]\n",
    "        self.__triple = []\n",
    "\n",
    "        self.__num_entity=0\n",
    "        self.__num_relation=0\n",
    "        self.__num_triple_train=0\n",
    "        self.__num_triple_test=0\n",
    "        self.__num_triple_valid=0\n",
    "\n",
    "        # load all the file: entity2id.txt, relation2id.txt, train.txt, test.txt, valid.txt\n",
    "        self.load_data()\n",
    "        print('finish preparing data. ')\n",
    "\n",
    "\n",
    "        # this part for the model:\n",
    "        self.__learning_rate = learning_rate\n",
    "        self.__batch_size = batch_size\n",
    "        self.__max_iter = max_iter\n",
    "        self.__margin = margin\n",
    "        self.__dimension = dimension\n",
    "        self.__variables= []\n",
    "        #self.__norm = norm\n",
    "        self.__evaluation_size = evaluation_size\n",
    "        bound = 6 / math.sqrt(self.__dimension)\n",
    "        with tf.device('/cpu'):\n",
    "            self.__embedding_entity = tf.get_variable('embedding_entity', [self.__num_entity, self.__dimension],\n",
    "                                                       initializer=tf.random_uniform_initializer(minval=-bound, maxval=bound, seed = 123), dtype=tf.float32)\n",
    "            self.__embedding_relation = tf.get_variable('embedding_relation', [self.__num_relation, self.__dimension],\n",
    "                                                         initializer=tf.random_uniform_initializer(minval=-bound, maxval=bound, seed =124), dtype=tf.float32)\n",
    "            self.__variables.append(self.__embedding_entity)\n",
    "            self.__variables.append(self.__embedding_relation)\n",
    "            print('finishing initializing')\n",
    "\n",
    "\n",
    "    def load_data(self):\n",
    "        print('loading entity2id.txt ...')\n",
    "        with open(os.path.join(self.__data_dir, 'entity2id.txt'), encoding='utf-8') as f:\n",
    "            self.__entity2id = {line.strip().split('\\t')[0]: int(line.strip().split('\\t')[1]) for line in f.readlines()}\n",
    "            self.__id2entity = {value:key for key,value in self.__entity2id.items()}\n",
    "\n",
    "        print('loading reltion2id.txt ...')     \n",
    "        with open(os.path.join(self.__data_dir,'relation2id.txt'), encoding='utf-8') as f:\n",
    "            self.__relation2id = {line.strip().split('\\t')[0]: int(line.strip().split('\\t')[1]) for line in f.readlines()}\n",
    "            self.__id2relation = {value:key for key, value in self.__relation2id.items()}\n",
    "\n",
    "        def load_triple(self, triplefile):\n",
    "            triple_list = [] #[(head_id, relation_id, tail_id),...]\n",
    "            with open(os.path.join(self.__data_dir, triplefile), encoding='utf-8') as f:\n",
    "                for line in f.readlines():\n",
    "                    line_list = line.strip().split('\\t')\n",
    "                    assert len(line_list) == 3\n",
    "                    headid = self.__entity2id[line_list[0]]\n",
    "                    relationid = self.__relation2id[line_list[1]]\n",
    "                    tailid = self.__entity2id[line_list[2]]\n",
    "                    triple_list.append((headid, relationid, tailid))\n",
    "                    self.__hr_t[(headid, relationid)].add(tailid)\n",
    "                    self.__tr_h[(tailid, relationid)].add(headid)\n",
    "            return triple_list\n",
    "\n",
    "        self.__hr_t = defaultdict(set)\n",
    "        self.__tr_h = defaultdict(set)\n",
    "        self.__triple_train = load_triple(self, 'train.txt')\n",
    "        self.__triple_test = load_triple(self, 'test.txt')\n",
    "        self.__triple_valid = load_triple(self, 'valid.txt')\n",
    "        self.__triple = np.concatenate([self.__triple_train, self.__triple_test, self.__triple_valid], axis = 0 )\n",
    "\n",
    "        self.__num_relation = len(self.__relation2id)\n",
    "        self.__num_entity = len(self.__entity2id)\n",
    "        self.__num_triple_train = len(self.__triple_train)\n",
    "        self.__num_triple_test = len(self.__triple_test)\n",
    "        self.__num_triple_valid = len(self.__triple_valid)\n",
    "\n",
    "        print('entity number: ' + str(self.__num_entity))\n",
    "        print('relation number: ' + str(self.__num_relation))\n",
    "        print('training triple number: ' + str(self.__num_triple_train))\n",
    "        print('testing triple number: ' + str(self.__num_triple_test))\n",
    "        print('valid triple number: ' + str(self.__num_triple_valid))\n",
    "\n",
    "\n",
    "        if self.__negative_sampling == 'bern':\n",
    "            self.__relation_property_head = {x:[] for x in range(self.__num_relation)} #{relation_id:[headid1, headid2,...]}\n",
    "            self.__relation_property_tail = {x:[] for x in range(self.__num_relation)} #{relation_id:[tailid1, tailid2,...]}\n",
    "            self.__relation_property = {x:[] for x in range(self.__num_relation)} \n",
    "            for t in self.__triple_train:\n",
    "                #print(t)\n",
    "                self.__relation_property_head[t[1]].append(t[0])\n",
    "                self.__relation_property_tail[t[1]].append(t[2])\n",
    "            #print(self.__relation_property_head[0])\n",
    "            #print(self.__relation_property_tail[0])\n",
    "            for x in self.__relation_property_head.keys():\n",
    "                t = len(set(self.__relation_property_tail[x]))\n",
    "                h = len(set(self.__relation_property_head[x]))\n",
    "                self.__relation_property[x] = float(t)/(h+t+0.000000001)\n",
    "            #self.__relation_property = {x:(len(set(self.__relation_property_tail[x])))/(len(set(self.__relation_property_head[x]))+ len(set(self.__relation_property_tail[x]))) \\\n",
    "            #\t\t\t\t\t\t\t for x in self.__relation_property_head.keys()} # {relation_id: p, ...} 0< num <1, and for relation replace head entity with the property p\n",
    "        else: \n",
    "            print(\"unif set don't need to calculate hpt and tph\")\n",
    "\n",
    "\n",
    "\n",
    "    def train(self, inputs):\n",
    "        embedding_relation = self.__embedding_relation\n",
    "        embedding_entity = self.__embedding_entity\n",
    "\n",
    "        triple_positive, triple_negative = inputs # triple_positive:(head_id,relation_id,tail_id)\n",
    "\n",
    "        #norm_entity = tf.nn.l2_normalize(embedding_entity, dim = 1)\n",
    "        #norm_relation = tf.nn.l2_normalize(embedding_relation, dim = 1)\n",
    "        norm_entity = embedding_entity\n",
    "        norm_relation = embedding_relation\n",
    "        norm_entity_l2sum = tf.sqrt(tf.reduce_sum(norm_entity**2, axis = 1))\n",
    "\n",
    "        embedding_positive_head = tf.nn.embedding_lookup(norm_entity, triple_positive[:, 0])\n",
    "        embedding_positive_tail = tf.nn.embedding_lookup(norm_entity, triple_positive[:, 2])\n",
    "        embedding_positive_relation = tf.nn.embedding_lookup(norm_relation, triple_positive[:, 1])\n",
    "\n",
    "        embedding_negative_head = tf.nn.embedding_lookup(norm_entity, triple_negative[:, 0])\n",
    "        embedding_negative_tail = tf.nn.embedding_lookup(norm_entity, triple_negative[:, 2])\n",
    "        embedding_negative_relation = tf.nn.embedding_lookup(norm_relation, triple_negative[:, 1])\n",
    "\n",
    "        score_positive = tf.reduce_sum(embedding_positive_head * embedding_positive_relation * embedding_positive_tail, axis = 1)\n",
    "        score_negative = tf.reduce_sum(embedding_negative_head * embedding_negative_relation * embedding_negative_tail, axis = 1)\n",
    "        score = tf.concat((-score_positive, score_negative), axis =0)\n",
    "        \n",
    "        loss_triple = tf.reduce_mean(tf.nn.softplus(score))\n",
    "        \n",
    "        self.__loss_regularizer = loss_regularizer = tf.reduce_sum(tf.abs(self.__embedding_relation)) + tf.reduce_sum(tf.abs(self.__embedding_entity))\n",
    "        return loss_triple + loss_regularizer*self.__regularizer_weight,  norm_entity_l2sum\n",
    "\n",
    "    def test(self, inputs):\n",
    "        embedding_relation = self.__embedding_relation\n",
    "        embedding_entity = self.__embedding_entity\n",
    "\n",
    "        triple_test = inputs # (headid, tailid, tailid)\n",
    "        head_vec = tf.nn.embedding_lookup(embedding_entity, triple_test[0])\n",
    "        rel_vec = tf.nn.embedding_lookup(embedding_relation, triple_test[1])\n",
    "        tail_vec = tf.nn.embedding_lookup(embedding_entity, triple_test[2])\n",
    "\n",
    "        norm_embedding_entity = tf.nn.l2_normalize(embedding_entity, dim =1 )\n",
    "        norm_embedding_relation = tf.nn.l2_normalize(embedding_relation, dim = 1)\n",
    "        norm_head_vec = tf.nn.embedding_lookup(norm_embedding_entity, triple_test[0])\n",
    "        norm_rel_vec = tf.nn.embedding_lookup(norm_embedding_relation, triple_test[1])\n",
    "        norm_tail_vec = tf.nn.embedding_lookup(norm_embedding_entity, triple_test[2])\n",
    "        \n",
    "        _, id_replace_head = tf.nn.top_k(tf.reduce_sum(embedding_entity * rel_vec * tail_vec, axis=1), k=self.__num_entity)\n",
    "        _, id_replace_tail = tf.nn.top_k(tf.reduce_sum(head_vec * rel_vec * embedding_entity, axis=1), k=self.__num_entity)\n",
    "        \n",
    "        _, norm_id_replace_head = tf.nn.top_k(tf.reduce_sum(norm_embedding_entity * norm_rel_vec * norm_tail_vec, axis=1), k=self.__num_entity)\n",
    "        _, norm_id_replace_tail = tf.nn.top_k(tf.reduce_sum(norm_head_vec * norm_rel_vec * norm_embedding_entity, axis=1), k=self.__num_entity)\n",
    "\n",
    "        return id_replace_head, id_replace_tail, norm_id_replace_head, norm_id_replace_tail"
   ]
  },
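The uniform negative sampling in `training_data_batch` above corrupts each positive triple twice, once in the head slot and once in the tail slot, drawing the replacement uniformly from all entities. A standalone NumPy sketch with made-up ids:

```python
import numpy as np

# made-up (head, relation, tail) id triples and entity count
positives = np.array([[0, 1, 2], [3, 1, 4]])
num_entity = 10
rng = np.random.default_rng(42)

neg_head = positives.copy()
neg_tail = positives.copy()
neg_head[:, 0] = rng.integers(num_entity, size=len(positives))  # corrupt heads
neg_tail[:, 2] = rng.integers(num_entity, size=len(positives))  # corrupt tails
negatives = np.concatenate([neg_head, neg_tail], axis=0)
print(negatives.shape)  # two corruptions per positive
```

Note that the relation id is never replaced, and a corruption may accidentally reproduce a true triple; like the notebook's generator, this sketch does not filter such false negatives out.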
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train_operation(model, learning_rate=0.01, margin=1.0, optimizer_str = 'gradient'):\n",
    "    with tf.device('/cpu'):\n",
    "        train_triple_positive_input = tf.placeholder(tf.int32, [None, 3])\n",
    "        train_triple_negative_input = tf.placeholder(tf.int32, [None, 3])\n",
    "\n",
    "        loss, norm_entity = model.train([train_triple_positive_input, train_triple_negative_input])\n",
    "        if optimizer_str == 'gradient':\n",
    "            optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate)\n",
    "        elif optimizer_str == 'rms':\n",
    "            optimizer = tf.train.RMSPropOptimizer(learning_rate = learning_rate)\n",
    "        elif optimizer_str == 'adam':\n",
    "            optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)\n",
    "        else:\n",
    "            raise NotImplementedError(\"Dose not support %s optimizer\" %optimizer_str)\n",
    "\n",
    "        grads = optimizer.compute_gradients(loss, model.variables)\n",
    "        op_train = optimizer.apply_gradients(grads)\n",
    "\n",
    "        return train_triple_positive_input, train_triple_negative_input, loss, op_train, norm_entity"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def test_operation(model):\n",
    "    with tf.device('/cpu'):\n",
    "        test_triple = tf.placeholder(tf.int32, [3])\n",
    "        head_rank, tail_rank, norm_head_rank, norm_tail_rank = model.test(test_triple)\n",
    "        return test_triple, head_rank, tail_rank, norm_head_rank, norm_tail_rank"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Args:\n",
    "    pass"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 测试一个样本函数\n",
    "def test_one_sample(model, trp, session):\n",
    "    t = trp\n",
    "    id_replace_head , id_replace_tail, norm_id_replace_head , norm_id_replace_tail  = session.run([head_rank, tail_rank, norm_head_rank, norm_tail_rank], {test_triple:t})\n",
    "    hr_t = model.hr_t\n",
    "    tr_h = model.tr_h\n",
    "    \n",
    "    hrank = 0\n",
    "    fhrank = 0\n",
    "    predicted_head_tmp = []\n",
    "    for i in range(len(id_replace_head)):\n",
    "        val = id_replace_head[i]\n",
    "        predicted_head_tmp.append(val)\n",
    "        if val == t[0]:\n",
    "            break\n",
    "        else: \n",
    "            hrank += 1\n",
    "            fhrank += 1 \n",
    "            if val in tr_h[(t[2],t[1])]:\n",
    "                fhrank -= 1\n",
    "                _ = predicted_head_tmp.pop()\n",
    "    predicted_head_tmp = [id_replace_head[i] for i in range(len(id_replace_head))]\n",
    "    \n",
    "    norm_hrank = 0\n",
    "    norm_fhrank = 0\n",
    "    norm_predicted_head_tmp = []\n",
    "    for i in range(len(norm_id_replace_head)):\n",
    "        val = norm_id_replace_head[i]\n",
    "        norm_predicted_head_tmp.append(val)\n",
    "        if val == t[0]:\n",
    "            break\n",
    "        else: \n",
    "            norm_hrank += 1\n",
    "            norm_fhrank += 1 \n",
    "            if val in tr_h[(t[2],t[1])]:\n",
    "                norm_fhrank -= 1\n",
    "                _ = norm_predicted_head_tmp.pop()\n",
    "    norm_predicted_head_tmp = [id_replace_head[i] for i in range(len(norm_id_replace_head))]\n",
    "\n",
    "    trank = 0\n",
    "    ftrank = 0\n",
    "    predicted_tail_tmp = []\n",
    "    for i in range(len(id_replace_tail)):\n",
    "        val = id_replace_tail[i]\n",
    "        predicted_tail_tmp.append(val)\n",
    "        if val == t[2]:\n",
    "            break\n",
    "        else:\n",
    "            trank += 1\n",
    "            ftrank += 1\n",
    "            if val in hr_t[(t[0], t[1])]:\n",
    "                ftrank -= 1\n",
    "                _ = predicted_tail_tmp.pop()\n",
    "    predicted_tail_tmp = [id_replace_tail[i] for i in range(len(id_replace_tail))]\n",
    "\n",
    "    norm_trank = 0\n",
    "    norm_ftrank = 0\n",
    "    norm_predicted_tail_tmp = []\n",
    "    for i in range(len(norm_id_replace_tail)):\n",
    "        val = norm_id_replace_tail[i]\n",
    "        norm_predicted_tail_tmp.append(val)\n",
    "        if val == t[2]:\n",
    "            break\n",
    "        else:\n",
    "            norm_trank += 1\n",
    "            norm_ftrank += 1\n",
    "            if val in hr_t[(t[0], t[1])]:\n",
    "                norm_ftrank -= 1\n",
    "                _ = norm_predicted_tail_tmp.pop()\n",
    "    norm_predicted_tail_tmp = [id_replace_tail[i] for i in range(len(norm_id_replace_tail))]\n",
    "    \n",
    "    return hrank+1, fhrank+1, trank+1, ftrank+1, norm_hrank+1, norm_fhrank+1, norm_trank+1, norm_ftrank+1, \\\n",
    "            predicted_head_tmp, predicted_tail_tmp, norm_predicted_head_tmp, norm_predicted_tail_tmp"
   ]
  },
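The bookkeeping in `test_one_sample` boils down to the following idea (a simplified standalone sketch with made-up ids): walk the entities in descending-score order; the raw rank counts every candidate that precedes the true answer, while the filtered rank additionally skips candidates that are themselves correct answers for the same query, as recorded in `hr_t` / `tr_h`:

```python
def raw_and_filtered_rank(ranked_ids, true_id, known_true):
    # ranked_ids: entity ids sorted by score, best first
    # known_true: other valid answers for this query, e.g. hr_t[(h, r)]
    rank, frank = 1, 1
    for cand in ranked_ids:
        if cand == true_id:
            return rank, frank
        rank += 1
        if cand not in known_true:  # other true answers do not penalize the filtered rank
            frank += 1
    return rank, frank

print(raw_and_filtered_rank([7, 3, 9, 5], true_id=5, known_true={7, 9}))  # → (4, 2)
```

Here entities 7 and 9 outrank the answer 5 but are themselves true, so only entity 3 counts against the filtered rank.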
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def hit(rank_head, rank_tail, k):\n",
    "    n_test = len(rank_head)\n",
    "    assert len(rank_head) == len(rank_tail)\n",
    "    hit_head = np.sum(np.asarray(np.asarray(rank_head)<=k , dtype=np.float32))/n_test\n",
    "    hit_tail = np.sum(np.asarray(np.asarray(rank_tail)<=k , dtype=np.float32))/n_test\n",
    "    hit = (hit_head + hit_tail)/2.0\n",
    "    return hit_head, hit_tail, hit\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def test(n_test):\n",
    "    predicted_tail = []\n",
    "    norm_predicted_tail = []\n",
    "    predicted_head = []\n",
    "    norm_predicted_head = []\n",
    "    np.random.shuffle(testing_data)\n",
    "    for i in range(n_test):\n",
    "        print('[%.2f sec] --- testing[%d/%d]' %(timeit.default_timer()-start, i+1, n_test), end='\\r')\n",
    "        t = testing_data[i]\n",
    "        hrank, fhrank, trank, ftrank, norm_hrank, norm_fhrank, norm_trank, norm_ftrank, \\\n",
    "                predicted_head_tmp, predicted_tail_tmp, norm_predicted_head_tmp, norm_predicted_tail_tmp = test_one_sample(model, t, session)\n",
    "        #print(hrank, fhrank, trank, ftrank, norm_hrank, norm_fhrank, norm_trank, norm_ftrank)\n",
    "        rank_head.append(hrank)\n",
    "        rank_tail.append(trank)\n",
    "        filter_rank_head.append(fhrank)\n",
    "        filter_rank_tail.append(ftrank)\n",
    "        \n",
    "        norm_rank_head.append(norm_hrank)\n",
    "        norm_rank_tail.append(norm_trank)\n",
    "        norm_filter_rank_head.append(norm_fhrank)\n",
    "        norm_filter_rank_tail.append(norm_ftrank)\n",
    "\n",
    "        predicted_tail.append(predicted_tail_tmp)\n",
    "        norm_predicted_tail.append(norm_predicted_tail_tmp)\n",
    "        predicted_head.append(predicted_head_tmp)\n",
    "        norm_predicted_head.append(norm_predicted_head_tmp)\n",
    "    mean_rank_head = np.sum(rank_head, dtype=np.float32)/n_test\n",
    "    mean_rank_tail = np.sum(rank_tail, dtype=np.float32)/n_test\n",
    "    filter_mean_rank_head = np.sum(filter_rank_head, dtype=np.float32)/n_test\n",
    "    filter_mean_rank_tail = np.sum(filter_rank_tail, dtype=np.float32)/n_test\n",
    "\n",
    "    norm_mean_rank_head = np.sum(norm_rank_head, dtype=np.float32)/n_test\n",
    "    norm_mean_rank_tail = np.sum(norm_rank_tail, dtype=np.float32)/n_test\n",
    "    norm_filter_mean_rank_head = np.sum(norm_filter_rank_head, dtype=np.float32)/n_test\n",
    "    norm_filter_mean_rank_tail = np.sum(norm_filter_rank_tail, dtype=np.float32)/n_test\n",
    "\n",
    "    mean_reciprocal_rank_head = np.sum(1.0/np.asarray(rank_head, dtype=np.float32))/n_test\n",
    "    mean_reciprocal_rank_tail = np.sum(1.0/np.asarray(rank_tail, dtype=np.float32))/n_test\n",
    "    filter_mean_reciprocal_rank_head = np.sum(1.0/np.asarray(filter_rank_head, dtype=np.float32))/n_test\n",
    "    filter_mean_reciprocal_rank_tail = np.sum(1.0/np.asarray(filter_rank_tail, dtype=np.float32))/n_test\n",
    "\n",
    "    hit1_head, hit1_tail, hit1 = hit(rank_head, rank_tail, 1)\n",
    "    filter_hit1_head, filter_hit1_tail, filter_hit1 = hit(filter_rank_head, filter_rank_tail, 1)\n",
    "    hit3_head, hit3_tail, hit3 = hit(rank_head, rank_tail, 3)\n",
    "    filter_hit3_head, filter_hit3_tail, filter_hit3 = hit(filter_rank_head, filter_rank_tail, 3)\n",
    "    hit10_head, hit10_tail, hit10 = hit(rank_head, rank_tail, 10)\n",
    "    filter_hit10_head, filter_hit10_tail, filter_hit10 = hit(filter_rank_head, filter_rank_tail, 10)\n",
    "\n",
    "\n",
    "    print('iter:%d --MR: %.2f  --MRR: %.2f  --hit@1: %.2f   --hit@3: %.2f    --hit@10: %.2f' %(n_iter, (mean_rank_head+ mean_rank_tail)/2, \n",
    "                                                            (mean_reciprocal_rank_head + mean_reciprocal_rank_tail)/2, \n",
    "                                                            hit1, hit3, hit3))\n",
    "    print('iter:%d --FMR: %.2f --FMRR: %.2f --Fhit@1: %.2f  --Fhit@3: %.2f   --Fhit@10: %.2f' %(n_iter, (filter_mean_rank_head+ filter_mean_rank_tail)/2, \n",
    "                                                                    (filter_mean_reciprocal_rank_head + filter_mean_reciprocal_rank_tail)/2,\n",
    "                                                                    filter_hit1, filter_hit3, filter_hit10)) \n",
    "    "
   ]
  },
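On made-up ranks, the metrics reported by `test()` are simple aggregates (a standalone NumPy sketch):

```python
import numpy as np

# toy ranks of the true entity for four test triples (made-up numbers)
ranks = np.array([1, 4, 12, 2], dtype=np.float32)

mr = float(ranks.mean())             # Mean Rank (MR): lower is better
mrr = float((1.0 / ranks).mean())    # Mean Reciprocal Rank (MRR): higher is better
hit10 = float(np.mean(ranks <= 10))  # hit@10: fraction ranked in the top 10
print(mr, mrr, hit10)
```

`test()` computes each metric separately for head and tail prediction (and for the raw vs. filtered ranks) and averages the two directions before printing.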
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<__main__.Args object at 0x134bdc828>\n",
      "loading entity2id.txt ...\n",
      "loading reltion2id.txt ...\n",
      "entity number: 14541\n",
      "relation number: 237\n",
      "training triple number: 272115\n",
      "testing triple number: 20466\n",
      "valid triple number: 17535\n",
      "finish preparing data. \n",
      "WARNING:tensorflow:From /Users/zhangwen/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Colocations handled automatically by placer.\n",
      "finishing initializing\n",
      "WARNING:tensorflow:From /Users/zhangwen/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.cast instead.\n",
      "WARNING:tensorflow:From <ipython-input-2-22c1244adb25>:255: calling l2_normalize (from tensorflow.python.ops.nn_impl) with dim is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "dim is deprecated, use axis instead\n"
     ]
    }
   ],
   "source": [
    "# 设置参数等\n",
    "args  = Args()\n",
    "args.data_dir = './data/FB15k-237/'\n",
    "args.learning_rate = 0.005\n",
    "args.batch_size = 2048\n",
    "args.max_iter = 200\n",
    "args.optimizer = 'adam'\n",
    "args.dimension = 300\n",
    "args.margin = 3\n",
    "args.norm = 'L2'\n",
    "args.evaluation_size = 500\n",
    "args.save_dir = 'output/'\n",
    "args.negative_sampling = 'bern'\n",
    "args.evaluate_per_iteration = 1\n",
    "args.evaluate_worker = 3\n",
    "args.regularizer_weight = 1e-7\n",
    "args.n_test = 100\n",
    "args.save_per = 100\n",
    "args.n_worker = 5\n",
    "args.max_iter = 50\n",
    "\n",
    "print(args)\n",
    "model = DistMult(negative_sampling=args.negative_sampling, data_dir=args.data_dir,\n",
    "                learning_rate=args.learning_rate, batch_size=args.batch_size,\n",
    "                max_iter=args.max_iter, margin=args.margin, \n",
    "                dimension=args.dimension, norm=args.norm, evaluation_size=args.evaluation_size, \n",
    "                regularizer_weight = args.regularizer_weight)\n",
    "\n",
    "train_triple_positive_input, train_triple_negative_input, loss, op_train, norm_entity = train_operation(model, learning_rate = args.learning_rate, margin = args.margin, optimizer_str = args.optimizer)\n",
    "test_triple, head_rank, tail_rank , norm_head_rank, norm_tail_rank= test_operation(model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /Users/zhangwen/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py:193: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.\n",
      "Instructions for updating:\n",
      "Use `tf.global_variables_initializer` instead.\n",
      "iter[0] ---loss: 101.43665 ---time: 40.85 ---prepare time : 0.31\n",
      "Model saved at ./save/DistMult_0.ckpt\n",
      "iter:0 --MR: 5960.71  --MRR: 0.01  --hit@1: 0.01   --hit@3: 0.01    --hit@10: 0.01\n",
      "iter:0 --FMR: 5852.61 --FMRR: 0.01 --Fhit@1: 0.01  --Fhit@3: 0.01   --Fhit@10: 0.01\n",
      "iter[1] ---loss: 93.67135 ---time: 39.79 ---prepare time : 0.31\n",
      "iter:1 --MR: 2718.01  --MRR: 0.06  --hit@1: 0.02   --hit@3: 0.07    --hit@10: 0.07\n",
      "iter:1 --FMR: 2562.93 --FMRR: 0.13 --Fhit@1: 0.07  --Fhit@3: 0.16   --Fhit@10: 0.24\n",
      "iter[2] ---loss: 78.94437 ---time: 49.94 ---prepare time : 0.41\n",
      "iter:2 --MR: 2096.07  --MRR: 0.08  --hit@1: 0.04   --hit@3: 0.07    --hit@10: 0.07\n",
      "iter:2 --FMR: 1926.93 --FMRR: 0.13 --Fhit@1: 0.08  --Fhit@3: 0.15   --Fhit@10: 0.23\n",
      "iter[3] ---loss: 49.86318 ---time: 41.80 ---prepare time : 0.35\n",
      "iter:3 --MR: 1324.98  --MRR: 0.05  --hit@1: 0.01   --hit@3: 0.04    --hit@10: 0.04\n",
      "iter:3 --FMR: 1133.86 --FMRR: 0.13 --Fhit@1: 0.08  --Fhit@3: 0.14   --Fhit@10: 0.21\n",
      "iter[4] ---loss: 30.59411 ---time: 42.51 ---prepare time : 0.35\n",
      "iter:4 --MR: 766.93  --MRR: 0.09  --hit@1: 0.04   --hit@3: 0.10    --hit@10: 0.10\n",
      "iter:4 --FMR: 648.05 --FMRR: 0.14 --Fhit@1: 0.07  --Fhit@3: 0.17   --Fhit@10: 0.28\n",
      "iter[5] ---loss: 24.33618 ---time: 44.83 ---prepare time : 0.37\n",
      "iter:5 --MR: 630.65  --MRR: 0.11  --hit@1: 0.06   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:5 --FMR: 428.42 --FMRR: 0.20 --Fhit@1: 0.13  --Fhit@3: 0.22   --Fhit@10: 0.30\n",
      "iter[6] ---loss: 21.58233 ---time: 39.38 ---prepare time : 0.30\n",
      "iter:6 --MR: 571.12  --MRR: 0.12  --hit@1: 0.06   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:6 --FMR: 318.72 --FMRR: 0.22 --Fhit@1: 0.15  --Fhit@3: 0.22   --Fhit@10: 0.34\n",
      "iter[7] ---loss: 19.98230 ---time: 39.30 ---prepare time : 0.30\n",
      "iter:7 --MR: 372.56  --MRR: 0.11  --hit@1: 0.07   --hit@3: 0.11    --hit@10: 0.11\n",
      "iter:7 --FMR: 248.62 --FMRR: 0.20 --Fhit@1: 0.14  --Fhit@3: 0.17   --Fhit@10: 0.34\n",
      "iter[8] ---loss: 19.03058 ---time: 39.30 ---prepare time : 0.30\n",
      "iter:8 --MR: 551.56  --MRR: 0.13  --hit@1: 0.08   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:8 --FMR: 379.24 --FMRR: 0.20 --Fhit@1: 0.14  --Fhit@3: 0.21   --Fhit@10: 0.34\n",
      "iter[9] ---loss: 18.29465 ---time: 39.36 ---prepare time : 0.30\n",
      "iter:9 --MR: 731.96  --MRR: 0.10  --hit@1: 0.03   --hit@3: 0.13    --hit@10: 0.13\n",
      "iter:9 --FMR: 558.88 --FMRR: 0.20 --Fhit@1: 0.12  --Fhit@3: 0.24   --Fhit@10: 0.35\n",
      "iter[10] ---loss: 17.69661 ---time: 39.28 ---prepare time : 0.30\n",
      "iter:10 --MR: 550.88  --MRR: 0.12  --hit@1: 0.07   --hit@3: 0.13    --hit@10: 0.13\n",
      "iter:10 --FMR: 332.91 --FMRR: 0.20 --Fhit@1: 0.11  --Fhit@3: 0.23   --Fhit@10: 0.36\n",
      "iter[11] ---loss: 17.15010 ---time: 39.63 ---prepare time : 0.31\n",
      "iter:11 --MR: 713.44  --MRR: 0.12  --hit@1: 0.07   --hit@3: 0.14    --hit@10: 0.14\n",
      "iter:11 --FMR: 489.20 --FMRR: 0.20 --Fhit@1: 0.15  --Fhit@3: 0.20   --Fhit@10: 0.32\n",
      "iter[12] ---loss: 16.91760 ---time: 40.06 ---prepare time : 0.32\n",
      "iter:12 --MR: 782.40  --MRR: 0.11  --hit@1: 0.05   --hit@3: 0.11    --hit@10: 0.11\n",
      "iter:12 --FMR: 520.36 --FMRR: 0.19 --Fhit@1: 0.11  --Fhit@3: 0.21   --Fhit@10: 0.38\n",
      "iter[13] ---loss: 16.54332 ---time: 40.07 ---prepare time : 0.32\n",
      "iter:13 --MR: 1040.62  --MRR: 0.10  --hit@1: 0.04   --hit@3: 0.09    --hit@10: 0.09\n",
      "iter:13 --FMR: 858.83 --FMRR: 0.17 --Fhit@1: 0.08  --Fhit@3: 0.18   --Fhit@10: 0.35\n",
      "iter[14] ---loss: 16.09099 ---time: 40.25 ---prepare time : 0.32\n",
      "iter:14 --MR: 813.97  --MRR: 0.11  --hit@1: 0.06   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:14 --FMR: 573.77 --FMRR: 0.22 --Fhit@1: 0.15  --Fhit@3: 0.23   --Fhit@10: 0.33\n",
      "iter[15] ---loss: 15.78657 ---time: 39.94 ---prepare time : 0.30\n",
      "iter:15 --MR: 835.24  --MRR: 0.08  --hit@1: 0.02   --hit@3: 0.09    --hit@10: 0.09\n",
      "iter:15 --FMR: 628.27 --FMRR: 0.19 --Fhit@1: 0.11  --Fhit@3: 0.22   --Fhit@10: 0.35\n",
      "iter[16] ---loss: 15.63762 ---time: 40.02 ---prepare time : 0.31\n",
      "iter:16 --MR: 1015.33  --MRR: 0.13  --hit@1: 0.07   --hit@3: 0.13    --hit@10: 0.13\n",
      "iter:16 --FMR: 813.29 --FMRR: 0.21 --Fhit@1: 0.12  --Fhit@3: 0.25   --Fhit@10: 0.39\n",
      "iter[17] ---loss: 15.47159 ---time: 40.46 ---prepare time : 0.31\n",
      "iter:17 --MR: 972.34  --MRR: 0.12  --hit@1: 0.06   --hit@3: 0.10    --hit@10: 0.10\n",
      "iter:17 --FMR: 702.09 --FMRR: 0.21 --Fhit@1: 0.14  --Fhit@3: 0.21   --Fhit@10: 0.36\n",
      "iter[18] ---loss: 15.18859 ---time: 40.76 ---prepare time : 0.33\n",
      "iter:18 --MR: 1091.11  --MRR: 0.14  --hit@1: 0.09   --hit@3: 0.14    --hit@10: 0.14\n",
      "iter:18 --FMR: 842.73 --FMRR: 0.24 --Fhit@1: 0.17  --Fhit@3: 0.26   --Fhit@10: 0.38\n",
      "iter[19] ---loss: 15.04257 ---time: 43.53 ---prepare time : 0.38\n",
      "iter:19 --MR: 1114.14  --MRR: 0.12  --hit@1: 0.07   --hit@3: 0.14    --hit@10: 0.14\n",
      "iter:19 --FMR: 856.15 --FMRR: 0.18 --Fhit@1: 0.11  --Fhit@3: 0.21   --Fhit@10: 0.34\n",
      "iter[20] ---loss: 14.99883 ---time: 40.13 ---prepare time : 0.31\n",
      "iter:20 --MR: 683.88  --MRR: 0.09  --hit@1: 0.04   --hit@3: 0.10    --hit@10: 0.10\n",
      "iter:20 --FMR: 530.08 --FMRR: 0.13 --Fhit@1: 0.06  --Fhit@3: 0.15   --Fhit@10: 0.28\n",
      "iter[21] ---loss: 14.76934 ---time: 40.43 ---prepare time : 0.33\n",
      "iter:21 --MR: 897.62  --MRR: 0.15  --hit@1: 0.09   --hit@3: 0.16    --hit@10: 0.16\n",
      "iter:21 --FMR: 631.27 --FMRR: 0.23 --Fhit@1: 0.15  --Fhit@3: 0.26   --Fhit@10: 0.38\n",
      "iter[22] ---loss: 14.50925 ---time: 40.34 ---prepare time : 0.34\n",
      "iter:22 --MR: 786.77  --MRR: 0.11  --hit@1: 0.06   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:22 --FMR: 634.55 --FMRR: 0.23 --Fhit@1: 0.15  --Fhit@3: 0.25   --Fhit@10: 0.36\n",
      "iter[23] ---loss: 14.36878 ---time: 40.08 ---prepare time : 0.32\n",
      "iter:23 --MR: 770.01  --MRR: 0.11  --hit@1: 0.04   --hit@3: 0.13    --hit@10: 0.13\n",
      "iter:23 --FMR: 556.22 --FMRR: 0.21 --Fhit@1: 0.15  --Fhit@3: 0.23   --Fhit@10: 0.34\n",
      "iter[24] ---loss: 14.31562 ---time: 40.22 ---prepare time : 0.32\n",
      "iter:24 --MR: 998.93  --MRR: 0.10  --hit@1: 0.04   --hit@3: 0.11    --hit@10: 0.11\n",
      "iter:24 --FMR: 808.55 --FMRR: 0.18 --Fhit@1: 0.11  --Fhit@3: 0.20   --Fhit@10: 0.32\n",
      "iter[25] ---loss: 14.17580 ---time: 40.40 ---prepare time : 0.32\n",
      "iter:25 --MR: 673.90  --MRR: 0.09  --hit@1: 0.04   --hit@3: 0.09    --hit@10: 0.09\n",
      "iter:25 --FMR: 454.42 --FMRR: 0.18 --Fhit@1: 0.09  --Fhit@3: 0.22   --Fhit@10: 0.37\n",
      "iter[26] ---loss: 13.98122 ---time: 39.93 ---prepare time : 0.32\n",
      "iter:26 --MR: 874.62  --MRR: 0.10  --hit@1: 0.04   --hit@3: 0.11    --hit@10: 0.11\n",
      "iter:26 --FMR: 710.28 --FMRR: 0.18 --Fhit@1: 0.12  --Fhit@3: 0.19   --Fhit@10: 0.32\n",
      "iter[27] ---loss: 13.91834 ---time: 40.70 ---prepare time : 0.32\n",
      "iter:27 --MR: 591.43  --MRR: 0.12  --hit@1: 0.08   --hit@3: 0.10    --hit@10: 0.10\n",
      "iter:27 --FMR: 434.46 --FMRR: 0.20 --Fhit@1: 0.14  --Fhit@3: 0.20   --Fhit@10: 0.34\n",
      "iter[28] ---loss: 13.77496 ---time: 40.15 ---prepare time : 0.32\n",
      "iter:28 --MR: 643.53  --MRR: 0.13  --hit@1: 0.09   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:28 --FMR: 421.69 --FMRR: 0.19 --Fhit@1: 0.12  --Fhit@3: 0.21   --Fhit@10: 0.34\n",
      "iter[29] ---loss: 13.70199 ---time: 40.33 ---prepare time : 0.32\n",
      "iter:29 --MR: 658.51  --MRR: 0.12  --hit@1: 0.06   --hit@3: 0.10    --hit@10: 0.10\n",
      "iter:29 --FMR: 468.47 --FMRR: 0.21 --Fhit@1: 0.12  --Fhit@3: 0.29   --Fhit@10: 0.36\n",
      "iter[30] ---loss: 13.67740 ---time: 42.25 ---prepare time : 0.35\n",
      "iter:30 --MR: 595.44  --MRR: 0.12  --hit@1: 0.07   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:30 --FMR: 473.49 --FMRR: 0.21 --Fhit@1: 0.15  --Fhit@3: 0.23   --Fhit@10: 0.34\n",
      "iter[31] ---loss: 13.50902 ---time: 42.40 ---prepare time : 0.37\n",
      "iter:31 --MR: 634.14  --MRR: 0.11  --hit@1: 0.05   --hit@3: 0.10    --hit@10: 0.10\n",
      "iter:31 --FMR: 427.67 --FMRR: 0.20 --Fhit@1: 0.10  --Fhit@3: 0.25   --Fhit@10: 0.41\n",
      "iter[32] ---loss: 13.53540 ---time: 44.46 ---prepare time : 0.37\n",
      "iter:32 --MR: 946.41  --MRR: 0.12  --hit@1: 0.07   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:32 --FMR: 764.40 --FMRR: 0.20 --Fhit@1: 0.14  --Fhit@3: 0.20   --Fhit@10: 0.37\n",
      "iter[33] ---loss: 13.39291 ---time: 42.38 ---prepare time : 0.35\n",
      "iter:33 --MR: 631.19  --MRR: 0.13  --hit@1: 0.08   --hit@3: 0.13    --hit@10: 0.13\n",
      "iter:33 --FMR: 475.39 --FMRR: 0.22 --Fhit@1: 0.15  --Fhit@3: 0.24   --Fhit@10: 0.38\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "iter[34] ---loss: 13.27316 ---time: 47.34 ---prepare time : 0.42\n",
      "iter:34 --MR: 608.02  --MRR: 0.14  --hit@1: 0.06   --hit@3: 0.17    --hit@10: 0.17\n",
      "iter:34 --FMR: 419.53 --FMRR: 0.26 --Fhit@1: 0.16  --Fhit@3: 0.28   --Fhit@10: 0.45\n",
      "iter[35] ---loss: 13.26994 ---time: 46.48 ---prepare time : 0.43\n",
      "iter:35 --MR: 753.16  --MRR: 0.13  --hit@1: 0.07   --hit@3: 0.14    --hit@10: 0.14\n",
      "iter:35 --FMR: 514.03 --FMRR: 0.24 --Fhit@1: 0.17  --Fhit@3: 0.23   --Fhit@10: 0.45\n",
      "iter[36] ---loss: 13.07923 ---time: 44.25 ---prepare time : 0.38\n",
      "iter:36 --MR: 1049.81  --MRR: 0.11  --hit@1: 0.05   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:36 --FMR: 856.95 --FMRR: 0.19 --Fhit@1: 0.11  --Fhit@3: 0.22   --Fhit@10: 0.30\n",
      "iter[37] ---loss: 13.03575 ---time: 44.58 ---prepare time : 0.41\n",
      "iter:37 --MR: 945.20  --MRR: 0.12  --hit@1: 0.06   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:37 --FMR: 600.32 --FMRR: 0.18 --Fhit@1: 0.09  --Fhit@3: 0.21   --Fhit@10: 0.39\n",
      "iter[38] ---loss: 13.01621 ---time: 41.04 ---prepare time : 0.33\n",
      "iter:38 --MR: 597.23  --MRR: 0.11  --hit@1: 0.04   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:38 --FMR: 447.21 --FMRR: 0.20 --Fhit@1: 0.13  --Fhit@3: 0.21   --Fhit@10: 0.37\n",
      "iter[39] ---loss: 12.98665 ---time: 40.91 ---prepare time : 0.33\n",
      "iter:39 --MR: 853.60  --MRR: 0.11  --hit@1: 0.05   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:39 --FMR: 595.77 --FMRR: 0.19 --Fhit@1: 0.11  --Fhit@3: 0.20   --Fhit@10: 0.35\n",
      "iter[40] ---loss: 12.84809 ---time: 44.18 ---prepare time : 0.38\n",
      "iter:40 --MR: 973.75  --MRR: 0.10  --hit@1: 0.04   --hit@3: 0.12    --hit@10: 0.12\n",
      "iter:40 --FMR: 716.00 --FMRR: 0.22 --Fhit@1: 0.13  --Fhit@3: 0.25   --Fhit@10: 0.40\n",
      "[32.89 sec](90/132): -- loss: 0.09590\r"
     ]
    }
   ],
   "source": [
    "# Train the model\n",
    "config = tf.ConfigProto()\n",
    "config.gpu_options.allow_growth = False\n",
    "config.log_device_placement = False\n",
    "config.allow_soft_placement = True\n",
    "config.gpu_options.per_process_gpu_memory_fraction=0.68\n",
    "session = tf.Session(config=config)\n",
    "\n",
    "session.run(tf.global_variables_initializer())\n",
    "saver = tf.train.Saver()\n",
    "\n",
    "\n",
    "for n_iter in range(args.max_iter):\n",
    "    accu_loss = 0.\n",
    "    batch = 0\n",
    "    num_batch = model.num_triple_train // args.batch_size\n",
    "    start_time = timeit.default_timer()\n",
    "    prepare_time = 0.\n",
    "\n",
    "    for tp, tn, t in model.training_data_batch(batch_size=args.batch_size):\n",
    "        l, _, norm_e = session.run([loss, op_train, norm_entity], {train_triple_positive_input:tp, train_triple_negative_input: tn})\n",
    "        accu_loss += l\n",
    "        batch += 1\n",
    "        print('[%.2f sec](%d/%d): -- loss: %.5f' %(timeit.default_timer()-start_time, batch, num_batch , l), end='\\r')\n",
    "        prepare_time += t\n",
    "    print('iter[%d] ---loss: %.5f ---time: %.2f ---prepare time : %.2f' % (n_iter, accu_loss, timeit.default_timer() - start_time, prepare_time))\n",
    "\n",
    "    if n_iter % args.save_per == 0 or n_iter == 0 or n_iter == args.max_iter - 1:\n",
    "        save_path = saver.save(session, os.path.join('./save/DistMult_' + str(n_iter) + '.ckpt'))\n",
    "        print('Model saved at %s' % save_path)\n",
    "\n",
    "    if n_iter % args.evaluate_per_iteration == 0 or n_iter == 0 or n_iter == args.max_iter - 1:\n",
    "        rank_head = []\n",
    "        rank_tail = []\n",
    "        filter_rank_head = []\n",
    "        filter_rank_tail = []\n",
    "\n",
    "        norm_rank_head = []\n",
    "        norm_rank_tail = []\n",
    "        norm_filter_rank_head = []\n",
    "        norm_filter_rank_tail = []\n",
    "\n",
    "        start = timeit.default_timer()\n",
    "        testing_data = model.testing_data\n",
    "        hr_t = model.hr_t\n",
    "        tr_h = model.tr_h\n",
    "        n_test = args.n_test\n",
    "        test(n_test)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test the model\n",
    "predicted_tail = []\n",
    "norm_predicted_tail = []\n",
    "predicted_head = []\n",
    "norm_predicted_head = []\n",
    "\n",
    "rank_head = []\n",
    "rank_tail = []\n",
    "filter_rank_head = []\n",
    "filter_rank_tail = []\n",
    "\n",
    "norm_rank_head = []\n",
    "norm_rank_tail = []\n",
    "norm_filter_rank_head = []\n",
    "norm_filter_rank_tail = []\n",
    "\n",
    "start = timeit.default_timer()\n",
    "testing_data = model.testing_data\n",
    "# hr_t = model.hr_t\n",
    "# tr_h = model.tr_h\n",
    "n_test = args.n_test\n",
    "if n_iter == args.max_iter - 1:\n",
    "    n_test = model.num_triple_test\n",
    "test(n_test)"
   ]
  },
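  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The training log above reports both raw metrics (`MR`/`MRR`/`hit@k`) and filtered ones (the `F`-prefixed `FMR`/`FMRR`/`Fhit@k`, where candidate entities that form other known true triples are excluded before ranking).\n",
    "Below is a minimal sketch of how such metrics are computed, assuming plain Python lists of 1-based ranks such as the `rank_head`/`rank_tail` lists collected above (the helper name `link_prediction_metrics` is illustrative, not part of the demo's code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def link_prediction_metrics(ranks, ks=(1, 3, 10)):\n",
    "    # ranks: 1-based position of the correct entity in each sorted candidate list\n",
    "    n = float(len(ranks))\n",
    "    metrics = {'MR': sum(ranks) / n,                    # mean rank\n",
    "               'MRR': sum(1.0 / r for r in ranks) / n}  # mean reciprocal rank\n",
    "    for k in ks:\n",
    "        # fraction of test triples whose correct entity is ranked in the top k\n",
    "        metrics['hit@%d' % k] = sum(1 for r in ranks if r <= k) / n\n",
    "    return metrics\n",
    "\n",
    "# e.g. link_prediction_metrics([1, 3, 120]) -> MR 41.33, MRR ~0.45, hit@3 ~0.67\n"
   ]
  },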
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We trained for 40 iterations with the settings above, and DistMult already shows a clear link-prediction signal on the FB15k-237 dataset.\n",
    "Hyperparameter tuning is beyond the scope of this demo; interested readers can consult the original paper, try different hyperparameter combinations, and observe their effect on training and prediction :-)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
