{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# DRKG Edge Score Analysis\n",
     "This notebook shows how to analyze the (h, r, t) edge scores. We use $$\\mbox{score} = \\gamma - ||\\mathbf{h}+\\mathbf{r}-\\mathbf{t}||_{2}$$ as the score function, which is consistent with the training methodology (please refer to Train_embeddings.ipynb for more details). Here $\\gamma$ is a constant used during training of the TransE model; it is set to 12.0."
   ]
  },
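  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the score function, we can sketch it with NumPy on random vectors (a minimal illustration with made-up inputs, not the trained embeddings):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "gamma = 12.0  # margin used when training the TransE model\n",
    "\n",
    "def transE_l2_score(h, r, t, gamma=12.0):\n",
    "    # score = gamma - ||h + r - t||_2; higher scores mean more plausible triples\n",
    "    return gamma - np.linalg.norm(h + r - t, axis=-1)\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "h, r, t = rng.normal(size=(3, 400))\n",
    "print(transE_l2_score(h, r, t))"
   ]
  },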
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Preparing Data for Edge Score Analysis\n",
     "\n",
     "To avoid the possible bias of overfitting the triplets in the training set, we split the whole DRKG into 10 equal folds and train 10 different models, each time using one fold as the test set and the remaining nine folds as the training set.\n",
     "\n",
     "Please make sure you have already installed the pytorch, dgl and dgl-ke packages.\n",
     "\n",
     "First, we load the DRKG dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import sys\n",
    "sys.path.insert(1, '../utils')\n",
    "from utils import download_and_extract\n",
    "download_and_extract()\n",
    "drkg_file = '../data/drkg/drkg.tsv'\n",
    "df = pd.read_csv(drkg_file, sep=\"\\t\")\n",
    "triples = df.values.tolist()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Create a directory to store the ten-fold training data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "!mkdir -p train/ten_fold"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Split the dataset into 10 equal parts."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_triples = len(triples)\n",
    "import numpy as np\n",
    "seed = np.arange(num_triples)\n",
    "np.random.shuffle(seed)\n",
    "\n",
     "fold_size = int((num_triples + 10) * 0.1)  # just over a tenth; the last fold takes the remainder\n",
    "total = 0\n",
    "for i in range(10):\n",
    "    fold_edge_cnt = fold_size if total + fold_size < num_triples else num_triples - total\n",
    "    fold_edges = seed[total:total+fold_edge_cnt]\n",
    "    with open(\"train/ten_fold/part{}.tsv\".format(i), 'w+') as f:\n",
    "        for idx in fold_edges:\n",
    "            f.writelines(\"{}\\t{}\\t{}\\n\".format(triples[idx][0], triples[idx][1], triples[idx][2]))\n",
    "    total += fold_edge_cnt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Build the ten training datasets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "for i in range(10):\n",
    "    os.mkdir(os.path.join(\"./train/ten_fold/\", \"part{}\".format(i)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from shutil import copyfile\n",
    "\n",
    "for i in range(10):\n",
     "    split_triples = []\n",
     "    for j in range(10):\n",
     "        if i == j:\n",
     "            continue\n",
     "        with open(\"./train/ten_fold/part{}.tsv\".format(j), 'r') as f:\n",
     "            for line in f:\n",
     "                split_triples.append(line)\n",
     "    \n",
     "    with open(os.path.join(\"./train/ten_fold/\", \"part{}\".format(i), \"skip_part{}.tsv\".format(i)), 'w+') as f:\n",
     "        f.writelines(split_triples)\n",
     "    copyfile(os.path.join(\"./train/ten_fold/\", \"part{}.tsv\".format(i)),\n",
     "             os.path.join(\"./train/ten_fold/\", \"part{}\".format(i), \"part{}.tsv\".format(i)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Now we have ten directories under ./train/ten_fold/."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "!ls train/ten_fold"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Then we run ten training processes, one per fold. For each fold *i*, the model trains on skip_part*i*.tsv (the other nine folds), while part*i*.tsv serves as both the validation and the test set."
   ]
  },
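  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ten training commands below differ only in the fold index, so they can also be generated in a loop, as in this sketch (print the commands and run them, or pass each one to os.system; adjust the GPU list and process count to your hardware):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cmd_template = (\n",
    "    \"DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part{i}/ \"\n",
    "    \"--data_files skip_part{i}.tsv part{i}.tsv part{i}.tsv --format 'raw_udd_hrt' \"\n",
    "    \"--model_name TransE_l2 --batch_size 2048 --neg_sample_size 256 --hidden_dim 400 \"\n",
    "    \"--gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 \"\n",
    "    \"-adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 \"\n",
    "    \"--num_proc 8 --neg_sample_size_eval 10000 --async_update\"\n",
    ")\n",
    "for i in range(10):\n",
    "    print(cmd_template.format(i=i))  # or run each command via os.system"
   ]
  },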
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part0/ --data_files skip_part0.tsv part0.tsv part0.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part1/ --data_files skip_part1.tsv part1.tsv part1.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part2/ --data_files skip_part2.tsv part2.tsv part2.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part3/ --data_files skip_part3.tsv part3.tsv part3.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part4/ --data_files skip_part4.tsv part4.tsv part4.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part5/ --data_files skip_part5.tsv part5.tsv part5.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part6/ --data_files skip_part6.tsv part6.tsv part6.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part7/ --data_files skip_part7.tsv part7.tsv part7.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part8/ --data_files skip_part8.tsv part8.tsv part8.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!DGLBACKEND=pytorch dglke_train --dataset DRKG --data_path ./train/ten_fold/part9/ --data_files skip_part9.tsv part9.tsv part9.tsv --format 'raw_udd_hrt' --model_name TransE_l2 --batch_size 2048 \\\n",
    "--neg_sample_size 256 --hidden_dim 400 --gamma 12.0 --lr 0.1 --max_step 100000 --log_interval 1000 --batch_size_eval 16 -adv --regularization_coef 1.00E-07 --test --num_thread 1 --gpu 0 1 2 3 4 5 6 7 --num_proc 8 --neg_sample_size_eval 10000 --async_update"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Loading Entity ID Mapping"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import os\n",
    "import csv\n",
    "import torch as th"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part0/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=[ 'entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        entity2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part0/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        rel2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_7/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_7/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p0_rows = []\n",
    "with open(\"./train/ten_fold/part0/part0.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p0_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "triple_ids = np.arange(head_ids.shape[0])\n",
    "\n",
    "gamma=12.0\n",
    "def transE_l2(head, rel, tail):\n",
    "    score = head + rel - tail\n",
    "    #return th.norm(score, p=2, dim=-1)\n",
    "    return gamma - th.norm(score, p=2, dim=-1)\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p0_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p0_l2_score))\n",
    "print(th.min(p0_l2_score))\n",
    "\n",
    "print(p0_l2_score.shape[0])\n",
    "p0_rel_score = {}\n",
    "for i in range(p0_l2_score.shape[0]):\n",
    "    rel = p0_rows[i][1]\n",
    "    if p0_rel_score.get(rel, None) is None:\n",
    "        p0_rel_score[rel] = []\n",
    "    p0_rel_score[rel].append(p0_l2_score[i])"
   ]
  },
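  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-fold cells in this section repeat the same loading and scoring with only the fold index and checkpoint path changing. A small helper like the hypothetical score_fold below (illustrative names, NumPy-only, computing the same gamma - ||h + r - t|| score as the torch version above) could factor out that repetition:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import csv\n",
    "import os\n",
    "import numpy as np\n",
    "\n",
    "def load_id_map(path):\n",
    "    # entities.tsv / relations.tsv are tab-separated name<TAB>id files\n",
    "    mapping = {}\n",
    "    with open(path, newline='', encoding='utf-8') as f:\n",
    "        for name, idx in csv.reader(f, delimiter='\\t'):\n",
    "            mapping[name] = int(idx)\n",
    "    return mapping\n",
    "\n",
    "def score_fold(part, ckpt_dir, gamma=12.0):\n",
    "    # hypothetical helper: score every (h, r, t) in a fold's test file, grouped by relation\n",
    "    base = './train/ten_fold/part{}'.format(part)\n",
    "    entity2id = load_id_map(os.path.join(base, 'entities.tsv'))\n",
    "    rel2id = load_id_map(os.path.join(base, 'relations.tsv'))\n",
    "    node_emb = np.load(os.path.join(ckpt_dir, 'DRKG_TransE_l2_entity.npy'))\n",
    "    rel_emb = np.load(os.path.join(ckpt_dir, 'DRKG_TransE_l2_relation.npy'))\n",
    "    scores_by_rel = {}\n",
    "    with open(os.path.join(base, 'part{}.tsv'.format(part)), newline='', encoding='utf-8') as f:\n",
    "        for head, rel, tail in csv.reader(f, delimiter='\\t'):\n",
    "            diff = node_emb[entity2id[head]] + rel_emb[rel2id[rel]] - node_emb[entity2id[tail]]\n",
    "            scores_by_rel.setdefault(rel, []).append(gamma - np.linalg.norm(diff))\n",
    "    return scores_by_rel\n",
    "\n",
    "# e.g. p0_rel_score = score_fold(0, 'ckpts/TransE_l2_DRKG_7')"
   ]
  },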
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part1/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=[ 'entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        entity2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part1/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        rel2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_8/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_8/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p1_rows = []\n",
    "with open(\"./train/ten_fold/part1/part1.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p1_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "triple_ids = np.arange(head_ids.shape[0])\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p1_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p1_l2_score))\n",
    "print(th.min(p1_l2_score))\n",
    "print(p1_l2_score.shape[0])\n",
    "p1_rel_score = {}\n",
    "for i in range(p1_l2_score.shape[0]):\n",
    "    rel = p1_rows[i][1]\n",
    "    if p1_rel_score.get(rel, None) is None:\n",
    "        p1_rel_score[rel] = []\n",
    "    p1_rel_score[rel].append(p1_l2_score[i])\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part2/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=[ 'entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        entity2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part2/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        rel2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_9/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_9/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p2_rows = []\n",
    "with open(\"./train/ten_fold/part2/part2.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p2_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "triple_ids = np.arange(head_ids.shape[0])\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p2_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p2_l2_score))\n",
    "print(th.min(p2_l2_score))\n",
    "\n",
    "p2_rel_score = {}\n",
    "for i in range(p2_l2_score.shape[0]):\n",
    "    rel = p2_rows[i][1]\n",
    "    if p2_rel_score.get(rel, None) is None:\n",
    "        p2_rel_score[rel] = []\n",
    "    p2_rel_score[rel].append(p2_l2_score[i])\n",
    "    \n",
    "#for key, rel_score in p2_rel_score.items():\n",
    "#    print(\"{}:{}\".format(key, len(rel_score)))\n",
    "#    plt.hist(rel_score)\n",
    "#    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part3/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=[ 'entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        entity2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part3/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        rel2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_10/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_10/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p3_rows = []\n",
    "with open(\"./train/ten_fold/part3/part3.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p3_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "triple_ids = np.arange(head_ids.shape[0])\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p3_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p3_l2_score))\n",
    "print(th.min(p3_l2_score))\n",
    "\n",
    "p3_rel_score = {}\n",
    "for i in range(p3_l2_score.shape[0]):\n",
    "    rel = p3_rows[i][1]\n",
    "    if p3_rel_score.get(rel, None) is None:\n",
    "        p3_rel_score[rel] = []\n",
    "    p3_rel_score[rel].append(p3_l2_score[i])\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part4/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=[ 'entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        entity2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part4/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        rel2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_11/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_11/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p4_rows = []\n",
    "with open(\"./train/ten_fold/part4/part4.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p4_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "triple_ids = np.arange(head_ids.shape[0])\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p4_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p4_l2_score))\n",
    "print(th.min(p4_l2_score))\n",
    "\n",
    "p4_rel_score = {}\n",
    "for i in range(p4_l2_score.shape[0]):\n",
    "    rel = p4_rows[i][1]\n",
    "    if p4_rel_score.get(rel, None) is None:\n",
    "        p4_rel_score[rel] = []\n",
    "    p4_rel_score[rel].append(p4_l2_score[i])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part5/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=[ 'entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        entity2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part5/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        rel2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_12/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_12/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p5_rows = []\n",
    "with open(\"./train/ten_fold/part5/part5.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p5_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "triple_ids = np.arange(head_ids.shape[0])\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p5_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p5_l2_score))\n",
    "print(th.min(p5_l2_score))\n",
    "\n",
    "p5_rel_score = {}\n",
    "for i in range(p5_l2_score.shape[0]):\n",
    "    rel = p5_rows[i][1]\n",
    "    if p5_rel_score.get(rel, None) is None:\n",
    "        p5_rel_score[rel] = []\n",
    "    p5_rel_score[rel].append(p5_l2_score[i])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part6"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part6/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=[ 'entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        entity2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part6/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity','id'])\n",
    "    for row_val in reader:\n",
    "        id = row_val['id']\n",
    "\n",
    "        rel2id[row_val['entity']] = int(id)\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_13/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_13/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p6_rows = []\n",
    "with open(\"./train/ten_fold/part6/part6.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p6_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "triple_ids = np.arange(head_ids.shape[0])\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p6_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p6_l2_score))\n",
    "print(th.min(p6_l2_score))\n",
    "\n",
    "p6_rel_score = {}\n",
    "for i in range(p6_l2_score.shape[0]):\n",
    "    rel = p6_rows[i][1]\n",
    "    if p6_rel_score.get(rel, None) is None:\n",
    "        p6_rel_score[rel] = []\n",
    "    p6_rel_score[rel].append(p6_l2_score[i])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part7"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part7/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity', 'id'])\n",
    "    for row_val in reader:\n",
    "        entity2id[row_val['entity']] = int(row_val['id'])\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part7/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['relation', 'id'])\n",
    "    for row_val in reader:\n",
    "        rel2id[row_val['relation']] = int(row_val['id'])\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_14/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_14/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p7_rows = []\n",
    "with open(\"./train/ten_fold/part7/part7.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p7_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p7_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p7_l2_score))\n",
    "print(th.min(p7_l2_score))\n",
    "\n",
    "p7_rel_score = {}\n",
    "for i in range(p7_l2_score.shape[0]):\n",
    "    rel = p7_rows[i][1]\n",
    "    if p7_rel_score.get(rel, None) is None:\n",
    "        p7_rel_score[rel] = []\n",
    "    p7_rel_score[rel].append(p7_l2_score[i])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part8"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part8/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity', 'id'])\n",
    "    for row_val in reader:\n",
    "        entity2id[row_val['entity']] = int(row_val['id'])\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part8/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['relation', 'id'])\n",
    "    for row_val in reader:\n",
    "        rel2id[row_val['relation']] = int(row_val['id'])\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_15/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_15/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p8_rows = []\n",
    "with open(\"./train/ten_fold/part8/part8.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p8_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p8_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p8_l2_score))\n",
    "print(th.min(p8_l2_score))\n",
    "\n",
    "p8_rel_score = {}\n",
    "for i in range(p8_l2_score.shape[0]):\n",
    "    rel = p8_rows[i][1]\n",
    "    if p8_rel_score.get(rel, None) is None:\n",
    "        p8_rel_score[rel] = []\n",
    "    p8_rel_score[rel].append(p8_l2_score[i])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge scores of part9"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entity2id = {}\n",
    "with open(\"./train/ten_fold/part9/entities.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['entity', 'id'])\n",
    "    for row_val in reader:\n",
    "        entity2id[row_val['entity']] = int(row_val['id'])\n",
    "\n",
    "print(len(entity2id))\n",
    "\n",
    "rel2id = {}\n",
    "with open(\"./train/ten_fold/part9/relations.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['relation', 'id'])\n",
    "    for row_val in reader:\n",
    "        rel2id[row_val['relation']] = int(row_val['id'])\n",
    "\n",
    "print(len(rel2id))\n",
    "\n",
    "node_emb = np.load('ckpts/TransE_l2_DRKG_16/DRKG_TransE_l2_entity.npy')\n",
    "rel_emb = np.load('ckpts/TransE_l2_DRKG_16/DRKG_TransE_l2_relation.npy')\n",
    "\n",
    "head_ids = []\n",
    "rel_ids = []\n",
    "tail_ids = []\n",
    "p9_rows = []\n",
    "with open(\"./train/ten_fold/part9/part9.tsv\", newline='', encoding='utf-8') as csvfile:\n",
    "    reader = csv.DictReader(csvfile, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "    for row_val in reader:\n",
    "        head = row_val['head']\n",
    "        rel = row_val['rel']\n",
    "        tail = row_val['tail']\n",
    "\n",
    "        head_id = entity2id[head]\n",
    "        rel_id = rel2id[rel]\n",
    "        tail_id = entity2id[tail]\n",
    "        \n",
    "        head_ids.append(head_id)\n",
    "        rel_ids.append(rel_id)\n",
    "        tail_ids.append(tail_id)\n",
    "        p9_rows.append((head, rel, tail))\n",
    "        \n",
    "head_ids = np.array(head_ids)\n",
    "rel_ids = np.array(rel_ids)\n",
    "tail_ids = np.array(tail_ids)\n",
    "\n",
    "with th.no_grad():\n",
    "    node_emb = th.tensor(node_emb)\n",
    "    rel_emb = th.tensor(rel_emb)\n",
    "    head_ids = th.tensor(head_ids)\n",
    "    rel_ids = th.tensor(rel_ids)\n",
    "    tail_ids = th.tensor(tail_ids)\n",
    "\n",
    "    head_embedding = node_emb[head_ids]\n",
    "    rel_embedding = rel_emb[rel_ids]\n",
    "    tail_embedding = node_emb[tail_ids]\n",
    "\n",
    "\n",
    "p9_l2_score = transE_l2(head_embedding, rel_embedding, tail_embedding)\n",
    "print(th.max(p9_l2_score))\n",
    "print(th.min(p9_l2_score))\n",
    "\n",
    "p9_rel_score = {}\n",
    "for i in range(p9_l2_score.shape[0]):\n",
    "    rel = p9_rows[i][1]\n",
    "    if p9_rel_score.get(rel, None) is None:\n",
    "        p9_rel_score[rel] = []\n",
    "    p9_rel_score[rel].append(p9_l2_score[i])\n"
   ]
  },
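  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-fold cells above repeat the same TSV parsing. As a sketch (assuming the same file layout as above, with hypothetical helper names), the parsing could be factored into two small functions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import csv\n",
    "\n",
    "def read_id_map(path):\n",
    "    \"\"\"Read a two-column (name, id) TSV into a dict mapping name -> int id.\"\"\"\n",
    "    mapping = {}\n",
    "    with open(path, newline='', encoding='utf-8') as f:\n",
    "        for row in csv.DictReader(f, delimiter='\\t', fieldnames=['name', 'id']):\n",
    "            mapping[row['name']] = int(row['id'])\n",
    "    return mapping\n",
    "\n",
    "def read_triples(path):\n",
    "    \"\"\"Read (head, rel, tail) triples from a TSV file.\"\"\"\n",
    "    with open(path, newline='', encoding='utf-8') as f:\n",
    "        reader = csv.DictReader(f, delimiter='\\t', fieldnames=['head', 'rel', 'tail'])\n",
    "        return [(row['head'], row['rel'], row['tail']) for row in reader]\n",
    "\n",
    "# Example usage (hypothetical; same layout as the cells above):\n",
    "# entity2id = read_id_map('./train/ten_fold/part0/entities.tsv')\n",
    "# triples = read_triples('./train/ten_fold/part0/part0.tsv')"
   ]
  },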
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Aggregate the edge scores from all ten folds"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Merge the per-relation score lists from all ten folds.\n",
    "# setdefault with a fresh list avoids aliasing the fold-0 lists and\n",
    "# guards against a relation that is missing from fold 0.\n",
    "fold_scores = [p0_rel_score, p1_rel_score, p2_rel_score, p3_rel_score,\n",
    "               p4_rel_score, p5_rel_score, p6_rel_score, p7_rel_score,\n",
    "               p8_rel_score, p9_rel_score]\n",
    "rel_score = {}\n",
    "for fold in fold_scores:\n",
    "    for key, val in fold.items():\n",
    "        rel_score.setdefault(key, []).extend(val)\n",
    "print(sum(len(val) for val in p0_rel_score.values()))  # sanity check: triples in fold 0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Edge Score Analysis\n",
    "We first pool the per-relation score lists into a single NumPy array."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "total = 0\n",
    "total_score = []\n",
    "for key, score in rel_score.items():\n",
    "    total += len(score)\n",
    "    score = th.stack(score).numpy()\n",
    "    total_score.append(score)\n",
    "total_score = np.concatenate(total_score)\n",
    "print(total)"
   ]
  },
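  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before plotting, summary statistics give a quick sense of the score distribution. This is a small sketch; `total_score` is the pooled array built in the previous cell."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Summary statistics of the pooled edge scores.\n",
    "print('mean:   %.4f' % np.mean(total_score))\n",
    "print('std:    %.4f' % np.std(total_score))\n",
    "print('median: %.4f' % np.median(total_score))\n",
    "p5, p95 = np.percentile(total_score, [5, 95])\n",
    "print('5%%/95%%: %.4f / %.4f' % (p5, p95))"
   ]
  },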
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then we draw a histogram of the pooled edge scores."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "plt.hist(total_score)\n",
    "plt.xlabel('Edge Scores')\n",
    "plt.ylabel('Number of Edges')\n",
    "plt.show()"
   ]
  },
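  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pooled histogram hides differences between relation types. As a sketch, we can rank relations by their mean edge score using `rel_score` from the aggregation cell; each score is cast to float so this works whether the lists hold Python floats or 0-d tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Mean edge score per relation type.\n",
    "rel_mean = {rel: float(np.mean([float(s) for s in scores]))\n",
    "            for rel, scores in rel_score.items()}\n",
    "\n",
    "# Ten relations with the highest mean score.\n",
    "for rel, m in sorted(rel_mean.items(), key=lambda kv: kv[1], reverse=True)[:10]:\n",
    "    print('%.4f\\t%s' % (m, rel))"
   ]
  },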
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
