{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Chapter 1: Tensor Factorization with Alternating Least Square (ALS)\n",
    "\n",
    "---\n",
    "**About this chapter**: In many real-world applications, data are multi-dimensional tensors by nature rather than table matrices. In this chapter, we first provide a preliminary overview of tensor factorization family. Then, we provide tensor factorization techniques using an iterative Alternating Least Square (ALS), which is a good starting point for understanding tensor factorization. Finally, we adapt two public real-world datasets:\n",
    "\n",
    "- [Urban traffic speed dataset in Guangzhou, China](https://github.com/xinychen/tensor-learning/tree/master/Guangzhou-data-set)\n",
    "- [Metro station passenger flow dataset in Hangzhou, China](https://github.com/xinychen/tensor-learning/tree/master/Hangzhou-data-set)\n",
    "\n",
    "to third-order tensors and evaluate tensor factorization techniques with an application of missing data imputation.\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.1 Tensor Factorization Family\n",
    "\n",
    "### 1.1.1 Tucker Factorization\n",
    "\n",
    "The idea of tensor decomposition/factorization is to find a low-rank structure approximating the original data. In mathematics, Tucker factorization decomposes a tensor into a set of matrices and one small core tensor [[**wiki**](https://en.wikipedia.org/wiki/Tucker_decomposition)]. Formally, given a third-order tensor $\\mathcal{Y}\\in\\mathbb{R}^{M\\times N\\times T}$, the Tucker form of a tensor (also known as Tucker decomposition/factorization) with low-rank $\\left(R_1,R_2,R_3\\right)$ is defined as\n",
    "\n",
    "$$\\mathcal{Y}\\approx\\mathcal{G}\\times_1 U\\times_2 V\\times_3 X,$$\n",
    "where $\\mathcal{G}\\in\\mathbb{R}^{R_1\\times R_2\\times R_3}$ is core tensor, and $U\\in\\mathbb{R}^{M\\times R_1},V\\in\\mathbb{R}^{N\\times R_2},X\\in\\mathbb{R}^{T\\times R_3}$ are factor matrices.\n",
    "\n",
    "Element-wise, for any $(i,j,t)$-th entry in tensor $\\mathcal{Y}$, the above formula of Tucker factorization can be rewritten as\n",
    "\n",
    "$$y_{ijt}\\approx\\sum_{r_1=1}^{R_1}\\sum_{r_2=1}^{R_2}\\sum_{r_3=1}^{R_3}g_{r_1r_2r_3}u_{ir_1}v_{jr_2}x_{tr_3},$$\n",
    "where $i=1,2,...,M$, $j=1,2,...,N$, and $t=1,2,...,T$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "def tucker_combine(core_tensor, mat1, mat2, mat3):\n",
    "    return np.einsum('abc, ia, jb, tc -> ijt', core_tensor, mat1, mat2, mat3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[[0.99405966 1.15972092 0.55399141]\n",
      "  [0.39851988 0.46179283 0.21837429]]\n",
      "\n",
      " [[1.21286676 1.34115242 0.5884397 ]\n",
      "  [0.48507951 0.53247026 0.23070272]]]\n",
      "\n",
      "tensor size:\n",
      "(2, 2, 3)\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "dim1 = 2\n",
    "dim2 = 2\n",
    "dim3 = 3\n",
    "r1 = 2\n",
    "r2 = 2\n",
    "r3 = 2\n",
    "core_tensor = np.random.rand(r1, r2, r3)\n",
    "mat1 = np.random.rand(dim1, r1)\n",
    "mat2 = np.random.rand(dim2, r2)\n",
    "mat3 = np.random.rand(dim3, r3)\n",
    "tensor = tucker_combine(core_tensor, mat1, mat2, mat3)\n",
    "print(tensor)\n",
    "print()\n",
    "print('tensor size:')\n",
    "print(tensor.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.1.2 CP Factorization\n",
    "\n",
    "Another common-used type of tensor factorization is CANDECOMP/PARAFAC (CP) factorization. This form assumes that a data tensor is approximated by a sum of outer products of few factor vectors. Specifically, given a third-order tensor $\\mathcal{Y}\\in\\mathbb{R}^{M\\times N\\times T}$, CP factorization is\n",
    "\n",
    "$$\\mathcal{Y}\\approx\\sum_{r=1}^{R}\\boldsymbol{u}_{r}\\circ\\boldsymbol{v}_{r}\\circ\\boldsymbol{x}_{r},$$\n",
    "where vector $\\boldsymbol{u}_{r}\\in\\mathbb{R}^{M}$ is $r$-th column of factor matrix $U\\in\\mathbb{R}^{M\\times R}$, and there are same definitions for vectors $\\boldsymbol{v}_{r}\\in\\mathbb{R}^{N}$ and $\\boldsymbol{x}_{r}\\in\\mathbb{R}^{T}$ in factor matrices $V\\in\\mathbb{R}^{N\\times R}$ and $X\\in\\mathbb{R}^{T\\times R}$, respectively. In fact, the outer product of these vectors is a rank-one tensor, therefore, we could approximate original data by $R$ rank-one tensors.\n",
    "\n",
    "Element-wise, for any $(i,j,t)$-th entry in tensor $\\mathcal{Y}$, we have\n",
    "\n",
    "$$y_{ijt}\\approx\\sum_{r=1}^{R}u_{ir}v_{jr}x_{tr},$$\n",
    "where $i=1,2,...,M$, $j=1,2,...,N$, and $t=1,2,...,T$. The symbol $\\circ$ denotes vector outer product.\n",
    "\n",
    "- **Example of CP combination**:\n",
    "\n",
    "Given matrices $U=\\left[ \\begin{array}{cc} 1 & 2 \\\\ 3 & 4 \\\\ \\end{array} \\right]\\in\\mathbb{R}^{2\\times 2}$, $V=\\left[ \\begin{array}{cc} 1 & 2 \\\\ 3 & 4 \\\\ 5 & 6 \\\\ \\end{array} \\right]\\in\\mathbb{R}^{3\\times 2}$ and $X=\\left[ \\begin{array}{cc} 1 & 5 \\\\ 2 & 6 \\\\ 3 & 7 \\\\ 4 & 8 \\\\ \\end{array} \\right]\\in\\mathbb{R}^{4\\times 2}$, then if $\\hat{\\mathcal{Y}}=\\sum_{r=1}^{R}\\boldsymbol{u}_{r}\\circ\\boldsymbol{v}_{r}\\circ\\boldsymbol{x}_{r}$, then, we have\n",
    "\n",
    "$$\\hat{Y}_1=\\hat{\\mathcal{Y}}(:,:,1)=\\left[ \\begin{array}{ccc} 31 & 42 & 65 \\\\ 63 & 86 & 135 \\\\ \\end{array} \\right],$$\n",
    "$$\\hat{Y}_2=\\hat{\\mathcal{Y}}(:,:,2)=\\left[ \\begin{array}{ccc} 38 & 52 & 82 \\\\ 78 & 108 & 174 \\\\ \\end{array} \\right],$$\n",
    "$$\\hat{Y}_3=\\hat{\\mathcal{Y}}(:,:,3)=\\left[ \\begin{array}{ccc} 45 & 62 & 99 \\\\ 93 & 130 & 213 \\\\ \\end{array} \\right],$$\n",
    "$$\\hat{Y}_4=\\hat{\\mathcal{Y}}(:,:,4)=\\left[ \\begin{array}{ccc} 52 & 72 & 116 \\\\ 108 & 152 & 252 \\\\ \\end{array} \\right].$$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "def cp_combine(mat1, mat2, mat3):\n",
    "    return np.einsum('ir, jr, tr -> ijt', mat1, mat2, mat3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[[ 31  38  45  52]\n",
      "  [ 42  52  62  72]\n",
      "  [ 65  82  99 116]]\n",
      "\n",
      " [[ 63  78  93 108]\n",
      "  [ 86 108 130 152]\n",
      "  [135 174 213 252]]]\n",
      "\n",
      "tensor size:\n",
      "(2, 3, 4)\n"
     ]
    }
   ],
   "source": [
    "U = np.array([[1, 2], [3, 4]])\n",
    "V = np.array([[1, 3], [2, 4], [5, 6]])\n",
    "X = np.array([[1, 5], [2, 6], [3, 7], [4, 8]])\n",
    "print(cp_combine(U, V, X))\n",
    "print()\n",
    "print('tensor size:')\n",
    "print(cp_combine(U, V, X).shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.2 Optimization Problem\n",
    "\n",
    "In this section, we explain Tucker factorization and CP factorization using the Alternating Least Square (ALS) algorithm.\n",
    "\n",
    "### 1.2.1 Tucker Factorization using ALS\n",
    "\n",
    "In Tucker factorization, learning is performed by minimizing the loss function (i.e., sum of residual errors) over core tensor and factor matrices:\n",
    "\n",
    "$$\\min_{\\mathcal{G},U,V,X}\\sum_{(i,j,t)\\in\\Omega}\\left(y_{ijt}-\\sum_{r_1=1}^{R_1}\\sum_{r_2=1}^{R_2}\\sum_{r_3=1}^{R_3}g_{r_1r_2r_3}u_{ir_1}v_{jr_2}x_{tr_3}\\right)^2,$$\n",
    "where $y_{ijt}$ is observed if $(i,j,t)\\in\\Omega$, and $\\Omega$ denotes a set of 3-tuple indices.\n",
    "\n",
    "The main challenge of solving this optimization is the need for learning core tensors and factor matrices simultaneously. One way is through the alternating Least Square (ALS) algorithm, which devides parameters into several disjoint sub-problems and iteratively minimizes the loss function with respect to the parameter of each sub-problem. To be specific, ALS optimizes alternatively over one of these parameters, such as $\\mathcal{G}$, $U$, $V$, or $X$ in Tucker factorization, while keeping others fixed. The reason is that the separated sub-problem in one parameter is convex.\n",
    "\n",
    "We could, for example, consider the following optimization problem for core tensor $\\mathcal{G}\\in\\mathbb{R}^{R_1\\times R_2\\times R_3}$:\n",
    "\n",
    "$$\\min_{\\mathcal{G}}\\sum_{(i,j,t)\\in\\Omega}\\left(y_{ijt}-\\sum_{r_1=1}^{R_1}\\sum_{r_2=1}^{R_2}\\sum_{r_3=1}^{R_3}g_{r_1r_2r_3}u_{ir_1}v_{jr_2}x_{tr_3}\\right)^2,$$\n",
    "\n",
    "$$\\Rightarrow\\min_{\\mathcal{G}}\\sum_{(i,j,t)\\in\\Omega}\\left(y_{ijt}-\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\text{vec}\\left(\\mathcal{G}\\right)\\right)^2,$$\n",
    "\n",
    "$$\\Rightarrow\\min_{\\mathcal{G}}\\sum_{(i,j,t)\\in\\Omega}\\left(y_{ijt}-\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\text{vec}\\left(\\mathcal{G}\\right)\\right)^\\top\\left(y_{ijt}-\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\text{vec}\\left(\\mathcal{G}\\right)\\right),$$\n",
    "where the symbol $\\text{vec}\\left(\\cdot\\right)$ denotes vectorization on matrix (or tensor).\n",
    "\n",
    "---\n",
    "**Theorem 1**: Suppose $d$-th order tensor $\\mathcal{G}\\in\\mathbb{R}^{n_1\\times n_2\\times\\cdots\\times n_d}$ and matrices $U_{k}\\in\\mathbb{R}^{m_k\\times n_k}$ for $k=1,2,...,d$. If the tensor $\\mathcal{A}\\in\\mathbb{R}^{m_1\\times m_2\\times\\cdots\\times m_d}$ is the multi-linear product\n",
    "\n",
    "$$\\mathcal{A}=\\mathcal{G}\\times_1 U_1\\times_2 U_2\\times_3\\cdots\\times_d U_d,$$\n",
    "then\n",
    "\n",
    "$$\\mathcal{A}_{(k)}=U_{k}\\mathcal{G}_{(k)}\\left(U_d\\otimes\\cdots\\otimes U_{k+1}\\otimes U_{k-1}\\otimes\\cdots\\otimes U_1\\right)^\\top,$$\n",
    "and\n",
    "\n",
    "$$\\text{vec}\\left(\\mathcal{A}\\right)=\\left(U_d\\otimes\\cdots\\otimes U_2\\otimes U_1\\right)\\text{vec}\\left(\\mathcal{G}\\right).$$\n",
    "\n",
    "If $U_1,U_2,...,U_d$ are all non-singluar, then $\\mathcal{G}=\\mathcal{A}\\times_1U_1^{-1}\\times_2U_2^{-1}\\times_3\\cdots\\times_dU_d^{-1}$.\n",
    "\n",
    "**Reference**: Gene H. Golub, Charles F. Van Loan, 2012. Matrix Computations (4th Edition). (page: 728-729)\n",
    "\n",
    "---\n",
    "\n",
    "Obviously, the solution for updating core tensor $\\mathcal{G}$ is\n",
    "\n",
    "$$\\text{vec}\\left(\\mathcal{G}\\right)\\Leftarrow\\left(\\sum_{(i,j,t)\\in\\Omega}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\right)^{-1}\\left(\\sum_{(i,j,t)\\in\\Omega}y_{ijt}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\right).$$"
   ]
  },
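  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for Theorem 1, the following cell numerically verifies the unfolding and vectorization identities on a small random example (a sketch; `ten2mat` is a local copy of the column-major unfolding function used later in this chapter)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def ten2mat(tensor, mode):\n",
    "    # Mode-k unfolding (column-major).\n",
    "    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')\n",
    "\n",
    "np.random.seed(0)\n",
    "G = np.random.rand(2, 3, 4)\n",
    "U1 = np.random.rand(5, 2)\n",
    "U2 = np.random.rand(6, 3)\n",
    "U3 = np.random.rand(7, 4)\n",
    "A = np.einsum('abc, ia, jb, tc -> ijt', G, U1, U2, U3)\n",
    "\n",
    "# vec(A) = (U3 kron U2 kron U1) vec(G), with column-major vectorization.\n",
    "vecA = A.reshape(-1, order = 'F')\n",
    "vecG = G.reshape(-1, order = 'F')\n",
    "print(np.allclose(vecA, np.kron(U3, np.kron(U2, U1)) @ vecG))\n",
    "\n",
    "# Mode-1 unfolding identity: A_(1) = U1 G_(1) (U3 kron U2)^T.\n",
    "print(np.allclose(ten2mat(A, 0), U1 @ ten2mat(G, 0) @ np.kron(U3, U2).T))"
   ]
  },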
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Similar to core tensor $\\mathcal{G}$, the optimization problem for factor matrix $U\\in\\mathbb{R}^{M\\times R_1}$ can be written as follows,\n",
    "\n",
    "$$\\min_{U}\\sum_{(i,j,t)\\in\\Omega}\\left(y_{ijt}-\\sum_{r_1=1}^{R_1}\\sum_{r_2=1}^{R_2}\\sum_{r_3=1}^{R_3}g_{r_1r_2r_3}u_{ir_1}v_{jr_2}x_{tr_3}\\right)^2,$$\n",
    "and this optimzation can be decomposed into independent least square problems with $\\boldsymbol{u}_{i}\\in\\mathbb{R}^{R},i=1,2,...,M$:\n",
    "\n",
    "$$\\min_{\\boldsymbol{u}_i}\\sum_{j,t:(i,j,t)\\in\\Omega}\\left(y_{ijt}-\\boldsymbol{u}_i^\\top\\mathcal{G}_{(1)}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)\\right)\\left(y_{ijt}-\\boldsymbol{u}_i^\\top\\mathcal{G}_{(1)}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)\\right)^\\top.$$\n",
    "\n",
    "In such case, we could derive the least square as\n",
    "\n",
    "$$\\boldsymbol{u}_{i}\\Leftarrow\\left(\\sum_{j,t:(i,j,t)\\in\\Omega}\\mathcal{G}_{(1)}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)^\\top\\mathcal{G}_{(1)}^\\top\\right)^{-1}\\left(\\sum_{j,t:(i,j,t)\\in\\Omega}y_{ijt}\\mathcal{G}_{(1)}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)\\right),\\forall i\\in\\left\\{1,2,...,M\\right\\}.$$\n",
    "\n",
    "The alternating least squares for $V\\in\\mathbb{R}^{N\\times R_2}$ and $X\\in\\mathbb{R}^{T\\times R_3}$ are\n",
    "\n",
    "$$\\boldsymbol{v}_{j}\\Leftarrow\\left(\\sum_{i,t:(i,j,t)\\in\\Omega}\\mathcal{G}_{(2)}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\mathcal{G}_{(2)}^\\top\\right)^{-1}\\left(\\sum_{i,t:(i,j,t)\\in\\Omega}y_{ijt}\\mathcal{G}_{(2)}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)\\right),\\forall j\\in\\left\\{1,2,...,N\\right\\},$$\n",
    "\n",
    "$$\\boldsymbol{x}_{t}\\Leftarrow\\left(\\sum_{i,j:(i,j,t)\\in\\Omega}\\mathcal{G}_{(3)}\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\mathcal{G}_{(3)}^\\top\\right)^{-1}\\left(\\sum_{i,j:(i,j,t)\\in\\Omega}y_{ijt}\\mathcal{G}_{(3)}\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\right),\\forall t\\in\\left\\{1,2,...,T\\right\\}.$$\n"
   ]
  },
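  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the update for $\\boldsymbol{u}_{i}$ (a sketch on a fully observed, noiseless example with randomly generated $\\mathcal{G}$, $V$, $X$), the closed-form solution of the normal equations recovers the true row exactly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "R1, R2, R3, N, T = 2, 2, 2, 4, 5\n",
    "G = np.random.rand(R1, R2, R3)\n",
    "V = np.random.rand(N, R2)\n",
    "X = np.random.rand(T, R3)\n",
    "u_true = np.random.rand(R1)\n",
    "\n",
    "# Mode-1 unfolding of the core tensor (column-major).\n",
    "G1 = np.reshape(G, (R1, -1), order = 'F')\n",
    "\n",
    "# Design rows G_(1)(x_t kron v_j) and noiseless targets y_{ijt} for one fixed i.\n",
    "W = []\n",
    "y = []\n",
    "for t in range(T):\n",
    "    for j in range(N):\n",
    "        w = G1 @ np.kron(X[t, :], V[j, :])\n",
    "        W.append(w)\n",
    "        y.append(u_true @ w)\n",
    "W = np.array(W)\n",
    "y = np.array(y)\n",
    "\n",
    "# Closed-form ALS update: solve the normal equations for u_i.\n",
    "u_hat = np.linalg.solve(W.T @ W, W.T @ y)\n",
    "print(np.allclose(u_hat, u_true))"
   ]
  },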
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "'''Prerequisite functions:'''\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "def kr_prod(a, b):\n",
    "    return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)\n",
    "\n",
    "def ten2mat(tensor, mode):\n",
    "    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')\n",
    "\n",
    "def mat2ten(mat, tensor_size, mode):\n",
    "    index = list()\n",
    "    index.append(mode)\n",
    "    for i in range(tensor_size.shape[0]):\n",
    "        if i != mode:\n",
    "            index.append(i)\n",
    "    return np.moveaxis(np.reshape(mat, list(tensor_size[index]), order = 'F'), 0, mode)\n",
    "\n",
    "def mat2vec(mat):\n",
    "    dim1, dim2 = mat.shape\n",
    "    return mat.T.reshape([dim1 * dim2])\n",
    "\n",
    "def vec2mat(vec, mat_size):\n",
    "    return vec.reshape([mat_size[1], mat_size[0]]).T\n",
    "\n",
    "def Tucker_ALS(sparse_tensor, rank, maxiter):\n",
    "    dim1, dim2, dim3 = sparse_tensor.shape\n",
    "    rank1 = rank[0]\n",
    "    rank2 = rank[1]\n",
    "    rank3 = rank[2]\n",
    "    \n",
    "    G = 0.1 * np.random.rand(rank1, rank2, rank3)\n",
    "    U = 0.1 * np.random.rand(dim1, rank1)\n",
    "    V = 0.1 * np.random.rand(dim2, rank2)\n",
    "    X = 0.1 * np.random.rand(dim3, rank3)\n",
    "    \n",
    "    pos = np.where(sparse_tensor != 0)\n",
    "    binary_tensor = np.zeros((dim1, dim2, dim3))\n",
    "    binary_tensor[pos] = 1\n",
    "    tensor_hat = np.zeros((dim1, dim2, dim3))\n",
    "    \n",
    "    for iters in range(maxiter):\n",
    "        small_mat = np.zeros((rank1 * rank2 * rank3, rank1 * rank2 * rank3))\n",
    "        small_vec = np.zeros((rank1 * rank2 * rank3))\n",
    "        for ind in range(pos[0].shape[0]):\n",
    "            vec0 = kr_prod(kr_prod(X[pos[2][ind], :].reshape([rank3, 1]), \n",
    "                                   V[pos[1][ind], :].reshape([rank2, 1])), \n",
    "                           U[pos[0][ind], :].reshape([rank1, 1]))\n",
    "            vec0 = vec0.reshape([rank1 * rank2 * rank3])\n",
    "            small_mat += np.outer(vec0, vec0)\n",
    "            small_vec += sparse_tensor[pos[0][ind], pos[1][ind], pos[2][ind]] * vec0\n",
    "        small_mat = (small_mat + small_mat.T)/2\n",
    "        G_vec = np.matmul(np.linalg.inv(small_mat), small_vec)\n",
    "        G = mat2ten(vec2mat(G_vec, np.array([rank1, rank2 * rank3])), np.array([rank1, rank2, rank3]), 0)\n",
    "\n",
    "        G1 = ten2mat(G, 0)\n",
    "        var1 = np.matmul(G1, np.kron(X, V).T)\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 0).T).reshape([rank1, rank1, dim1])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 0).T)\n",
    "        for i in range(dim1):\n",
    "            var_Lambda = var3[ :, :, i]\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            U[i, :] = np.matmul(inv_var_Lambda, var4[:, i])\n",
    "\n",
    "        G2 = ten2mat(G, 1)\n",
    "        var1 = np.matmul(G2, np.kron(X, U).T)\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 1).T).reshape([rank2, rank2, dim2])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 1).T)\n",
    "        for j in range(dim2):\n",
    "            var_Lambda = var3[ :, :, j]\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            V[j, :] = np.matmul(inv_var_Lambda, var4[:, j])\n",
    "\n",
    "        G3 = ten2mat(G, 2)\n",
    "        var1 = np.matmul(G3, np.kron(V, U).T)\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 2).T).reshape([rank3, rank3, dim3])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 2).T)\n",
    "        for t in range(dim3):\n",
    "            var_Lambda = var3[ :, :, t]\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            X[t, :] = np.matmul(inv_var_Lambda, var4[:, t])\n",
    "\n",
    "        tensor_hat = tucker_combine(G, U, V, X)\n",
    "        mape = np.sum(np.abs(sparse_tensor[pos] - tensor_hat[pos])/sparse_tensor[pos])/sparse_tensor[pos].shape[0]\n",
    "        rmse = np.sqrt(np.sum((sparse_tensor[pos] - tensor_hat[pos]) ** 2)/sparse_tensor[pos].shape[0])\n",
    "        \n",
    "        if (iters + 1) % 5 == 0:\n",
    "            print('Iter: {}'.format(iters + 1))\n",
    "            print('Training MAPE: {:.6}'.format(mape))\n",
    "            print('Training RMSE: {:.6}'.format(rmse))\n",
    "            print()\n",
    "    \n",
    "    return tensor_hat, G, U, V, X"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- **Example: Traffic Volume Data Imputation**\n",
    "\n",
    "Given a small traffic volume dataset with form of third-order tensor $\\mathcal{X}\\in\\mathbb{R}^{7\\times 4\\times 3}$, i.e.,\n",
    "\n",
    "$$\\mathcal{X}(:, :, 1)=\\left[\\begin{array}{cccc}{155} & {74} & {493} & {426} \\\\ {108} & {44} & {350} & {359} \\\\ {175} & {78} & {567} & {581} \\\\ {181} & {111} & {517} & {552} \\\\ {137} & {53} & {489} & {485} \\\\ {90} & {44} & {306} & {290} \\\\ {139} & {55} & {398} & {390}\\end{array}\\right],\\mathcal{X}( :, :, 2)=\\left[\\begin{array}{cccc}{172} & {69} & {590} & {386} \\\\ {104} & {39} & {310} & {304} \\\\ {158} & {74} & {505} & {546} \\\\ {176} & {90} & {525} & {552} \\\\ {150} & {64} & {438} & {459} \\\\ {73} & {32} & {281} & {299} \\\\ {127} & {51} & {358} & {382}\\end{array}\\right],\\mathcal{X}( :, :, 3)=\\left[\\begin{array}{cccc}{225} & {92} & {443} & {436} \\\\ {94} & {44} & {355} & {356} \\\\ {139} & {77} & {575} & {604} \\\\ {175} & {98} & {574} & {553} \\\\ {126} & {67} & {593} & {484} \\\\ {58} & {49} & {348} & {301} \\\\ {144} & {71} & {444} & {396}\\end{array}\\right],$$\n",
    "where 7 indicates 7 spatial locations (or sensors), 4 indicates 4 days, and 3 indicates 3 15-minute time intervals. The unit of tensor entries is vehicle per 15 min."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Suppose an incomplete tensor $\\mathcal{Y}$:\n",
    "\n",
    "$$\\mathcal{X}(:, :, 1)=\\left[\\begin{array}{cccc}{155} & {74} & {493} & {426} \\\\ {108} & {0} & {0} & {0} \\\\ {175} & {78} & {0} & {0} \\\\ {0} & {111} & {517} & {0} \\\\ {137} & {53} & {489} & {0} \\\\ {90} & {44} & {0} & {0} \\\\ {139} & {0} & {398} & {0}\\end{array}\\right],\\mathcal{X}( :, :, 2)=\\left[\\begin{array}{cccc}{172} & {69} & {590} & {0} \\\\ {104} & {0} & {310} & {304} \\\\ {0} & {0} & {505} & {546} \\\\ {0} & {90} & {525} & {552} \\\\ {0} & {64} & {0} & {0} \\\\ {73} & {32} & {281} & {299} \\\\ {127} & {0} & {0} & {0}\\end{array}\\right],\\mathcal{X}( :, :, 3)=\\left[\\begin{array}{cccc}{225} & {0} & {0} & {436} \\\\ {0} & {44} & {0} & {356} \\\\ {0} & {0} & {575} & {604} \\\\ {175} & {98} & {574} & {0} \\\\ {126} & {67} & {0} & {0} \\\\ {58} & {49} & {348} & {0} \\\\ {144} & {0} & {444} & {0}\\end{array}\\right],$$\n",
    "and the problem is to estimate the tensor entries filled with 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "dim1 = 7\n",
    "dim2 = 4\n",
    "dim3 = 3\n",
    "dense_tensor = np.zeros((dim1, dim2, dim3))\n",
    "dense_tensor[:, :, 0] = np.array([[155, 74, 493, 426], [108, 44, 350, 359],\n",
    "                                  [175, 78, 567, 581], [181, 111, 517, 552],\n",
    "                                  [137, 53, 489, 485], [90, 44, 306, 290],\n",
    "                                  [139, 55, 398, 390]])\n",
    "dense_tensor[:, :, 1] = np.array([[172, 69, 590, 386], [104, 39, 310, 304],\n",
    "                                  [158, 74, 505, 546], [176, 90, 525, 552],\n",
    "                                  [150, 64, 438, 459], [73, 32, 281, 299],\n",
    "                                  [127, 51, 358, 382]])\n",
    "dense_tensor[:, :, 2] = np.array([[225, 92, 443, 436], [94, 44, 355, 356],\n",
    "                                  [139, 77, 575, 604], [175, 98, 574, 553],\n",
    "                                  [126, 67, 593, 484], [58, 49, 348, 301],\n",
    "                                  [144, 71, 444, 396]])\n",
    "sparse_tensor = np.zeros((dim1, dim2, dim3))\n",
    "sparse_tensor[:, :, 0] = np.array([[155, 74, 493, 426], [108, 0, 0, 0],\n",
    "                                   [175, 78, 0, 0], [0, 111, 517, 0],\n",
    "                                   [137, 53, 489, 0], [90, 44, 0, 0],\n",
    "                                   [139, 0, 398, 0]])\n",
    "sparse_tensor[:, :, 1] = np.array([[172, 69, 590, 0], [104, 0, 310, 304],\n",
    "                                   [0, 0, 505, 546], [0, 90, 525, 552],\n",
    "                                   [0, 64, 0, 0], [73, 32, 281, 299],\n",
    "                                   [127, 0, 0, 0]])\n",
    "sparse_tensor[:, :, 2] = np.array([[225, 0, 0, 436], [0, 44, 0, 356],\n",
    "                                   [0, 0, 575, 604], [175, 98, 574, 0],\n",
    "                                   [126, 67, 0, 0], [58, 49, 348, 0],\n",
    "                                   [144, 0, 444, 0]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iter: 5\n",
      "Training MAPE: 0.104804\n",
      "Training RMSE: 25.7809\n",
      "\n",
      "Iter: 10\n",
      "Training MAPE: 0.105464\n",
      "Training RMSE: 25.7619\n",
      "\n",
      "Iter: 15\n",
      "Training MAPE: 0.105465\n",
      "Training RMSE: 25.7619\n",
      "\n",
      "Iter: 20\n",
      "Training MAPE: 0.105465\n",
      "Training RMSE: 25.7619\n",
      "\n",
      "Final Imputation MAPE: 0.109442\n",
      "Final Imputation RMSE: 40.2588\n",
      "\n"
     ]
    }
   ],
   "source": [
    "rank1 = 1\n",
    "rank2 = 1\n",
    "rank3 = 1\n",
    "rank = np.array([rank1, rank2, rank3])\n",
    "maxiter = 20\n",
    "tensor_hat, G, U, V, X = Tucker_ALS(sparse_tensor, rank, maxiter)\n",
    "pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\n",
    "final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\n",
    "final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\n",
    "print('Final Imputation MAPE: {:.6}'.format(final_mape))\n",
    "print('Final Imputation RMSE: {:.6}'.format(final_rmse))\n",
    "print()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Core tensor:\n",
      "[[[1236813.4422681]]]\n",
      "\n",
      "Factor matrix U:\n",
      "[[0.20816467]\n",
      " [0.13368968]\n",
      " [0.22668262]\n",
      " [0.22372434]\n",
      " [0.20096008]\n",
      " [0.12273073]\n",
      " [0.17260962]]\n",
      "\n",
      "Factor matrix V:\n",
      "[[0.01239407]\n",
      " [0.00588805]\n",
      " [0.0392647 ]\n",
      " [0.03751036]]\n",
      "\n",
      "Factor matrix X:\n",
      "[[0.04783521]\n",
      " [0.05071082]\n",
      " [0.05295695]]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print('Core tensor:')\n",
    "print(G)\n",
    "print()\n",
    "print('Factor matrix U:')\n",
    "print(U)\n",
    "print()\n",
    "print('Factor matrix V:')\n",
    "print(V)\n",
    "print()\n",
    "print('Factor matrix X:')\n",
    "print(X)\n",
    "print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "From the factorized results, we could observe that core tensor $\\mathcal{G}\\in\\mathbb{R}$ is an extreme large value. In that case, Tucker factorization suffers from the overfitting issue. Therefore, we would like to use the $\\ell_{1}$ (or $\\ell_{2}$) norm in the following to improve robustness against data outliers.\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.2.2 CP Factorization using ALS\n",
    "\n",
    "Indeed, the formula of CP factorization is a special case of Tucker factorization. Mathematically, for any $(i,j,t)$-th entry of a given third-order tensor $\\mathcal{Y}$, the form of CP factorization can be written as\n",
    "\n",
    "$$y_{ijt}\\approx\\sum_{r=1}^{R}u_{ir}v_{jr}x_{tr}=\\sum_{r_1=1}^{R}\\sum_{r_2=1}^{R}\\sum_{r_3=1}^{R}g_{r_1r_2r_3}u_{ir_1}v_{jr_2}x_{jr_3},$$\n",
    "where hyper-diagonal entries of the core tensor $\\mathcal{G}$ are 1. In other words, $g_{r_1r_2r_3}=1$ for any $r_1=r_2=r_3$ and $g_{r_1r_2r_3}=0$ otherwise.\n",
    "\n",
    "Regarding CP factorization as a machine learning problem, we could perform a learning task by minimizing the loss function over factor matrices like aforementioned Tucker factorization, that is,\n",
    "\n",
    "$$\\min _{U, V, X} \\sum_{(i, j, t) \\in \\Omega}\\left(y_{i j t}-\\sum_{r=1}^{R}u_{ir}v_{jr}x_{tr}\\right)^{2}.$$\n",
    "\n",
    "Within this optimization problem, multiplication among three factor matrices (acted as parameters) makes this problem difficult. Therefore, following the aforementioned Tucker factorization scheme, we apply the ALS algorithm for CP factorization.\n",
    "\n",
    "In particular, the optimization problem for each row $\\boldsymbol{u}_{i}\\in\\mathbb{R}^{R},\\forall i\\in\\left\\{1,2,...,M\\right\\}$ of factor matrix $U\\in\\mathbb{R}^{M\\times R}$ is given by\n",
    "\n",
    "$$\\min _{\\boldsymbol{u}_{i}} \\sum_{j,t:(i, j, t) \\in \\Omega}\\left[y_{i j t}-\\boldsymbol{u}_{i}^\\top\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)\\right]\\left[y_{i j t}-\\boldsymbol{u}_{i}^\\top\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\right)\\right]^\\top.$$\n",
    "\n",
    "The least square for this optimization is\n",
    "\n",
    "$$u_{i} \\Leftarrow\\left(\\sum_{j, t, i, j, t ) \\in \\Omega} \\left(x_{t} \\odot v_{j}\\right)\\left(x_{t} \\odot v_{j}\\right)^{\\top}\\right)^{-1}\\left(\\sum_{j, t :(i, j, t) \\in \\Omega} y_{i j t} \\left(x_{t} \\odot v_{j}\\right)\\right), \\forall i \\in\\{1,2, \\ldots, M\\}.$$\n",
    "\n",
    "The alternating least squares for $V\\in\\mathbb{R}^{N\\times R}$ and $X\\in\\mathbb{R}^{T\\times R}$ are\n",
    "\n",
    "$$\\boldsymbol{v}_{j}\\Leftarrow\\left(\\sum_{i,t:(i,j,t)\\in\\Omega}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\right)^{-1}\\left(\\sum_{i,t:(i,j,t)\\in\\Omega}y_{ijt}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{u}_{i}\\right)\\right),\\forall j\\in\\left\\{1,2,...,N\\right\\},$$\n",
    "\n",
    "$$\\boldsymbol{x}_{t}\\Leftarrow\\left(\\sum_{i,j:(i,j,t)\\in\\Omega}\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\right)^{-1}\\left(\\sum_{i,j:(i,j,t)\\in\\Omega}y_{ijt}\\left(\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\right),\\forall t\\in\\left\\{1,2,...,T\\right\\}.$$\n"
   ]
  },
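  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The claim that CP factorization is a special case of Tucker factorization can be checked numerically: combining random factor matrices with a super-diagonal core tensor reproduces the CP combination (a small random sketch)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "M, N, T, R = 3, 4, 5, 2\n",
    "U = np.random.rand(M, R)\n",
    "V = np.random.rand(N, R)\n",
    "X = np.random.rand(T, R)\n",
    "\n",
    "# CP combination: sum of R rank-one tensors.\n",
    "cp_tensor = np.einsum('ir, jr, tr -> ijt', U, V, X)\n",
    "\n",
    "# Tucker combination with a super-diagonal core tensor (ones on the diagonal).\n",
    "G = np.zeros((R, R, R))\n",
    "for r in range(R):\n",
    "    G[r, r, r] = 1.0\n",
    "tucker_tensor = np.einsum('abc, ia, jb, tc -> ijt', G, U, V, X)\n",
    "\n",
    "print(np.allclose(cp_tensor, tucker_tensor))"
   ]
  },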
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "**Theorem 2**: Suppose matrices $U^{(k)}\\in\\mathbb{R}^{m_k\\times n}$ for $k=1,2,...,d$. If the tensor $\\mathcal{A}\\in\\mathbb{R}^{m_1\\times m_2\\times\\cdots\\times m_d}$ has such CP model\n",
    "\n",
    "$$\\mathcal{A}=\\sum_{r=1}^{n}\\boldsymbol{u}_{r}^{(1)}\\circ\\boldsymbol{u}_{r}^{(2)}\\circ\\cdots\\circ\\boldsymbol{u}_{r}^{(d)},$$\n",
    "then\n",
    "\n",
    "$$\\mathcal{A}_{(k)}=U^{(k)}\\left(U^{(d)}\\odot\\cdots\\odot U^{(k+1)}\\odot U^{(k-1)}\\odot\\cdots\\odot U^{(1)}\\right)^\\top.$$\n",
    "\n",
    "---"
   ]
  },
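  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Theorem 2 can also be verified numerically for a third-order CP model; the Khatri-Rao product below is the same `kr_prod` function used throughout this chapter (a sketch on random factor matrices)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def kr_prod(a, b):\n",
    "    # Khatri-Rao product (column-wise Kronecker product).\n",
    "    return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)\n",
    "\n",
    "def ten2mat(tensor, mode):\n",
    "    # Mode-k unfolding (column-major).\n",
    "    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')\n",
    "\n",
    "np.random.seed(0)\n",
    "M, N, T, R = 3, 4, 5, 2\n",
    "U = np.random.rand(M, R)\n",
    "V = np.random.rand(N, R)\n",
    "X = np.random.rand(T, R)\n",
    "A = np.einsum('ir, jr, tr -> ijt', U, V, X)\n",
    "\n",
    "# Mode-1 unfolding identity: A_(1) = U * (Khatri-Rao product of X and V)^T.\n",
    "print(np.allclose(ten2mat(A, 0), U @ kr_prod(X, V).T))"
   ]
  },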
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "'''Prerequisite functions:'''\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "def kr_prod(a, b):\n",
    "    return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)\n",
    "\n",
    "def ten2mat(tensor, mode):\n",
    "    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')\n",
    "\n",
    "def CP_ALS(sparse_tensor, rank, maxiter):\n",
    "    dim1, dim2, dim3 = sparse_tensor.shape\n",
    "    \n",
    "    U = 0.1 * np.random.rand(dim1, rank)\n",
    "    V = 0.1 * np.random.rand(dim2, rank)\n",
    "    X = 0.1 * np.random.rand(dim3, rank)\n",
    "    \n",
    "    pos = np.where(sparse_tensor != 0)\n",
    "    binary_tensor = np.zeros((dim1, dim2, dim3))\n",
    "    binary_tensor[pos] = 1\n",
    "    tensor_hat = np.zeros((dim1, dim2, dim3))\n",
    "    \n",
    "    for iters in range(maxiter):\n",
    "        var1 = kr_prod(X, V).T\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 0).T).reshape([rank, rank, dim1])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 0).T)\n",
    "        for i in range(dim1):\n",
    "            var_Lambda = var3[ :, :, i]\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            U[i, :] = np.matmul(inv_var_Lambda, var4[:, i])\n",
    "\n",
    "        var1 = kr_prod(X, U).T\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 1).T).reshape([rank, rank, dim2])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 1).T)\n",
    "        for j in range(dim2):\n",
    "            var_Lambda = var3[ :, :, j]\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            V[j, :] = np.matmul(inv_var_Lambda, var4[:, j])\n",
    "\n",
    "        var1 = kr_prod(V, U).T\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 2).T).reshape([rank, rank, dim3])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 2).T)\n",
    "        for t in range(dim3):\n",
    "            var_Lambda = var3[ :, :, t]\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            X[t, :] = np.matmul(inv_var_Lambda, var4[:, t])\n",
    "\n",
    "        tensor_hat = cp_combine(U, V, X)\n",
    "        mape = np.sum(np.abs(sparse_tensor[pos] - tensor_hat[pos])/sparse_tensor[pos])/sparse_tensor[pos].shape[0]\n",
    "        rmse = np.sqrt(np.sum((sparse_tensor[pos] - tensor_hat[pos]) ** 2)/sparse_tensor[pos].shape[0])\n",
    "        \n",
    "        if (iters + 1) % 20 == 0:\n",
    "            print('Iter: {}'.format(iters + 1))\n",
    "            print('Training MAPE: {:.6}'.format(mape))\n",
    "            print('Training RMSE: {:.6}'.format(rmse))\n",
    "            print()\n",
    "    \n",
    "    return tensor_hat, U, V, X"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.3 $\\ell_{2}$ Norm Regularization\n",
    "\n",
    "Rewrite the optimization problem over the core tensor $\\mathcal{G}$:\n",
    "\n",
    "$$\\min_{\\mathcal{G}}\\sum_{(i,j,t)\\in\\Omega}\\left(y_{ijt}-\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top\\text{vec}\\left(\\mathcal{G}\\right)\\right)^{2}+\\lambda_g\\text{vec}\\left(\\mathcal{G}\\right)^\\top\\text{vec}\\left(\\mathcal{G}\\right).$$\n",
    "\n",
    "Setting the gradient with respect to $\\text{vec}\\left(\\mathcal{G}\\right)$ to zero yields the closed-form update\n",
    "\n",
    "$$\\text{vec}\\left(\\mathcal{G}\\right)\\Leftarrow\\left(\\sum_{(i,j,t)\\in\\Omega}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top+\\lambda_gI\\right)^{-1}\\left(\\sum_{(i,j,t)\\in\\Omega}y_{ijt}\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)\\right).$$"
   ]
  },
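  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The update above is an ordinary ridge-regularized least squares solve: stacking the design vectors $\\left(\\boldsymbol{x}_{t}\\odot\\boldsymbol{v}_{j}\\odot\\boldsymbol{u}_{i}\\right)^\\top$ into a matrix $A$ and the observed entries into a vector $\\boldsymbol{y}$, it reads $\\left(A^\\top A+\\lambda_g I\\right)^{-1}A^\\top\\boldsymbol{y}$. A minimal sketch of this solve on synthetic data (the names here are illustrative, not the notebook's variables):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "A = rng.standard_normal((50, 8))   # stacked design vectors, one row per observed entry\n",
    "y = A @ rng.standard_normal(8) + 0.01 * rng.standard_normal(50)\n",
    "lam = 1.0                          # plays the role of lambda_g\n",
    "\n",
    "# Solving the regularized normal equations directly is preferable to forming the inverse\n",
    "x = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ y)\n",
    "```"
   ]
  },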
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "'''Prerequisite functions:'''\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "def kr_prod(a, b):\n",
    "    return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)\n",
    "\n",
    "def ten2mat(tensor, mode):\n",
    "    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')\n",
    "\n",
    "def mat2ten(mat, tensor_size, mode):\n",
    "    index = list()\n",
    "    index.append(mode)\n",
    "    for i in range(tensor_size.shape[0]):\n",
    "        if i != mode:\n",
    "            index.append(i)\n",
    "    return np.moveaxis(np.reshape(mat, list(tensor_size[index]), order = 'F'), 0, mode)\n",
    "\n",
    "def mat2vec(mat):\n",
    "    dim1, dim2 = mat.shape\n",
    "    return mat.T.reshape([dim1 * dim2])\n",
    "\n",
    "def vec2mat(vec, mat_size):\n",
    "    return vec.reshape([mat_size[1], mat_size[0]]).T\n",
    "\n",
    "def Tucker_ALS_L2(sparse_tensor, rank, hyper_lambda, maxiter):\n",
    "    dim1, dim2, dim3 = sparse_tensor.shape\n",
    "    rank1 = rank[0]\n",
    "    rank2 = rank[1]\n",
    "    rank3 = rank[2]\n",
    "    \n",
    "    G = 0.1 * np.random.rand(rank1, rank2, rank3)\n",
    "    U = 0.1 * np.random.rand(dim1, rank1)\n",
    "    V = 0.1 * np.random.rand(dim2, rank2)\n",
    "    X = 0.1 * np.random.rand(dim3, rank3)\n",
    "    \n",
    "    pos = np.where(sparse_tensor != 0)\n",
    "    binary_tensor = np.zeros((dim1, dim2, dim3))\n",
    "    binary_tensor[pos] = 1\n",
    "    tensor_hat = np.zeros((dim1, dim2, dim3))\n",
    "    \n",
    "    for iters in range(maxiter):\n",
    "        small_mat = np.zeros((rank1 * rank2 * rank3, rank1 * rank2 * rank3))\n",
    "        small_vec = np.zeros((rank1 * rank2 * rank3))\n",
    "        for ind in range(pos[0].shape[0]):\n",
    "            vec0 = kr_prod(kr_prod(X[pos[2][ind], :].reshape([rank3, 1]), \n",
    "                                   V[pos[1][ind], :].reshape([rank2, 1])), \n",
    "                           U[pos[0][ind], :].reshape([rank1, 1]))\n",
    "            vec0 = vec0.reshape([rank1 * rank2 * rank3])\n",
    "            small_mat += np.outer(vec0, vec0)\n",
    "            small_vec += sparse_tensor[pos[0][ind], pos[1][ind], pos[2][ind]] * vec0\n",
    "        small_mat += hyper_lambda[0] * np.eye(rank1 * rank2 * rank3)\n",
    "        small_mat = (small_mat + small_mat.T)/2\n",
    "        G_vec = np.matmul(np.linalg.inv(small_mat), small_vec)\n",
    "        G = mat2ten(vec2mat(G_vec, np.array([rank1, rank2 * rank3])), np.array([rank1, rank2, rank3]), 0)\n",
    "\n",
    "        G1 = ten2mat(G, 0)\n",
    "        var1 = np.matmul(G1, np.kron(X, V).T)\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 0).T).reshape([rank1, rank1, dim1])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 0).T)\n",
    "        for i in range(dim1):\n",
    "            var_Lambda = var3[:, :, i] + hyper_lambda[1] * np.eye(rank1)\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            U[i, :] = np.matmul(inv_var_Lambda, var4[:, i])\n",
    "\n",
    "        G2 = ten2mat(G, 1)\n",
    "        var1 = np.matmul(G2, np.kron(X, U).T)\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 1).T).reshape([rank2, rank2, dim2])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 1).T)\n",
    "        for j in range(dim2):\n",
    "            var_Lambda = var3[:, :, j] + hyper_lambda[2] * np.eye(rank2)\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            V[j, :] = np.matmul(inv_var_Lambda, var4[:, j])\n",
    "\n",
    "        G3 = ten2mat(G, 2)\n",
    "        var1 = np.matmul(G3, np.kron(V, U).T)\n",
    "        var2 = kr_prod(var1, var1)\n",
    "        var3 = np.matmul(var2, ten2mat(binary_tensor, 2).T).reshape([rank3, rank3, dim3])\n",
    "        var4 = np.matmul(var1, ten2mat(sparse_tensor, 2).T)\n",
    "        for t in range(dim3):\n",
    "            var_Lambda = var3[:, :, t] + hyper_lambda[3] * np.eye(rank3)\n",
    "            inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)\n",
    "            X[t, :] = np.matmul(inv_var_Lambda, var4[:, t])\n",
    "\n",
    "        tensor_hat = tucker_combine(G, U, V, X)\n",
    "        mape = np.sum(np.abs(sparse_tensor[pos] - tensor_hat[pos])/sparse_tensor[pos])/sparse_tensor[pos].shape[0]\n",
    "        rmse = np.sqrt(np.sum((sparse_tensor[pos] - tensor_hat[pos]) ** 2)/sparse_tensor[pos].shape[0])\n",
    "        \n",
    "        if (iters + 1) % 5 == 0:\n",
    "            print('Iter: {}'.format(iters + 1))\n",
    "            print('Training MAPE: {:.6}'.format(mape))\n",
    "            print('Training RMSE: {:.6}'.format(rmse))\n",
    "            print()\n",
    "    \n",
    "    return tensor_hat, G, U, V, X"
   ]
  },
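  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the prerequisite functions, the self-contained sketch below (it re-defines `kr_prod`, `ten2mat`, and `mat2ten` on small synthetic arrays) verifies that `ten2mat` and `mat2ten` are inverse operations for every mode, and that each column of `kr_prod` is the Kronecker product of the corresponding factor columns, i.e. the Khatri-Rao product:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def kr_prod(a, b):\n",
    "    return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)\n",
    "\n",
    "def ten2mat(tensor, mode):\n",
    "    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')\n",
    "\n",
    "def mat2ten(mat, tensor_size, mode):\n",
    "    index = [mode] + [i for i in range(tensor_size.shape[0]) if i != mode]\n",
    "    return np.moveaxis(np.reshape(mat, list(tensor_size[index]), order = 'F'), 0, mode)\n",
    "\n",
    "tensor = np.random.rand(3, 4, 5)\n",
    "for mode in range(3):\n",
    "    # unfolding then folding recovers the original tensor\n",
    "    assert np.allclose(mat2ten(ten2mat(tensor, mode), np.array([3, 4, 5]), mode), tensor)\n",
    "\n",
    "a, b = np.random.rand(3, 2), np.random.rand(4, 2)\n",
    "for r in range(2):\n",
    "    # column r of kr_prod(a, b) is kron(a[:, r], b[:, r])\n",
    "    assert np.allclose(kr_prod(a, b)[:, r], np.kron(a[:, r], b[:, r]))\n",
    "```"
   ]
  },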
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1.4 Spatiotemporal Data Imputation\n",
    "\n",
    "In the following appendices, we evaluate the Tucker and CP factorization models on the Guangzhou urban traffic speed dataset with an application of missing data imputation, under both random missing (RM) and non-random missing (NM) scenarios."
   ]
  },
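  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before turning to the real dataset, the sketch below illustrates on a tiny synthetic tensor how the two masking schemes used in the appendices are generated (all arrays here are synthetic stand-ins, not the Guangzhou files): random missing (RM) drops individual entries independently, while non-random missing (NM) drops whole mode-3 fibers, i.e. the entire time series of a $(i_1, i_2)$ pair:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(1000)\n",
    "dense_tensor = np.random.rand(4, 3, 5)\n",
    "missing_rate = 0.4\n",
    "\n",
    "# RM: one Bernoulli(1 - missing_rate) draw per entry\n",
    "rm_tensor = np.round(np.random.rand(4, 3, 5) + 0.5 - missing_rate)\n",
    "\n",
    "# NM: one draw per (i1, i2) pair, shared across the third mode\n",
    "random_matrix = np.random.rand(4, 3)\n",
    "nm_tensor = np.zeros((4, 3, 5))\n",
    "for i1 in range(4):\n",
    "    for i2 in range(3):\n",
    "        nm_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)\n",
    "\n",
    "sparse_tensor = np.multiply(dense_tensor, nm_tensor)\n",
    "```"
   ]
  },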
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Appendix A1: Tucker Factorization on the Guangzhou Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "import scipy.io\n",
    "\n",
    "tensor = scipy.io.loadmat('../Guangzhou-data-set/tensor.mat')\n",
    "dense_tensor = tensor['tensor']\n",
    "random_matrix = scipy.io.loadmat('../Guangzhou-data-set/random_matrix.mat')\n",
    "random_matrix = random_matrix['random_matrix']\n",
    "random_tensor = scipy.io.loadmat('../Guangzhou-data-set/random_tensor.mat')\n",
    "random_tensor = random_tensor['random_tensor']\n",
    "\n",
    "missing_rate = 0.4\n",
    "\n",
    "# =============================================================================\n",
    "### Random missing (RM) scenario\n",
    "### Set the RM scenario by:\n",
    "# binary_tensor = np.round(random_tensor + 0.5 - missing_rate)\n",
    "# =============================================================================\n",
    "\n",
    "# =============================================================================\n",
    "### Non-random missing (NM) scenario\n",
    "### Set the NM scenario by:\n",
    "binary_tensor = np.zeros(dense_tensor.shape)\n",
    "for i1 in range(dense_tensor.shape[0]):\n",
    "    for i2 in range(dense_tensor.shape[1]):\n",
    "        binary_tensor[i1,i2,:] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate)\n",
    "# =============================================================================\n",
    "\n",
    "sparse_tensor = np.multiply(dense_tensor, binary_tensor)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iter: 5\n",
      "Training MAPE: 0.10782\n",
      "Training RMSE: 4.45378\n",
      "\n",
      "Iter: 10\n",
      "Training MAPE: 0.107373\n",
      "Training RMSE: 4.43722\n",
      "\n",
      "Iter: 15\n",
      "Training MAPE: 0.107354\n",
      "Training RMSE: 4.43592\n",
      "\n",
      "Iter: 20\n",
      "Training MAPE: 0.107333\n",
      "Training RMSE: 4.43546\n",
      "\n",
      "Final Imputation MAPE: 0.106999\n",
      "Final Imputation RMSE: 4.45675\n",
      "\n",
      "Running time: 1944 seconds.\n"
     ]
    }
   ],
   "source": [
    "import time\n",
    "start = time.time()\n",
    "\n",
    "rank1 = 6\n",
    "rank2 = 6\n",
    "rank3 = 6\n",
    "rank = np.array([rank1, rank2, rank3])\n",
    "maxiter = 20\n",
    "tensor_hat, G, U, V, X = Tucker_ALS(sparse_tensor, rank, maxiter)\n",
    "pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\n",
    "final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\n",
    "final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\n",
    "print('Final Imputation MAPE: {:.6}'.format(final_mape))\n",
    "print('Final Imputation RMSE: {:.6}'.format(final_rmse))\n",
    "print()\n",
    "\n",
    "end = time.time()\n",
    "print('Running time: %d seconds.'%(end - start))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Table 1**: Summarized results of Tucker factorization on missing data imputation (Guangzhou dataset).\n",
    "\n",
    "|  scenario |    `rank`| `maxiter`|       MAPE |      RMSE | running time |\n",
    "|:----------|---------:|---------:|-----------:|----------:|-------------:|\n",
    "|**40%, NM**|   (2,2,2)|       20 |     0.1319 |    5.2762 |     393 sec. |\n",
    "|**40%, NM**|   (3,3,3)|       20 |     0.1210 |    4.8816 |     460 sec. |\n",
    "|**40%, NM**|   (4,4,4)|       20 |     0.1129 |    4.6304 |     609 sec. |\n",
    "|**40%, NM**|   (5,5,5)|       20 |     0.1095 |    4.5260 |    1041 sec. |\n",
    "|**40%, NM**|   (6,6,6)|       20 |     0.1070 |    4.4568 |    1944 sec. |\n",
    "|**40%, NM**|   (7,7,7)|       20 |     0.1046 |    4.3955 |     sec. |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Appendix A2: Tucker Factorization with Regularization on the Guangzhou Dataset\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "import scipy.io\n",
    "\n",
    "tensor = scipy.io.loadmat('../Guangzhou-data-set/tensor.mat')\n",
    "dense_tensor = tensor['tensor']\n",
    "random_matrix = scipy.io.loadmat('../Guangzhou-data-set/random_matrix.mat')\n",
    "random_matrix = random_matrix['random_matrix']\n",
    "random_tensor = scipy.io.loadmat('../Guangzhou-data-set/random_tensor.mat')\n",
    "random_tensor = random_tensor['random_tensor']\n",
    "\n",
    "missing_rate = 0.4\n",
    "\n",
    "# =============================================================================\n",
    "### Random missing (RM) scenario\n",
    "### Set the RM scenario by:\n",
    "# binary_tensor = np.round(random_tensor + 0.5 - missing_rate)\n",
    "# =============================================================================\n",
    "\n",
    "# =============================================================================\n",
    "### Non-random missing (NM) scenario\n",
    "### Set the NM scenario by:\n",
    "binary_tensor = np.zeros(dense_tensor.shape)\n",
    "for i1 in range(dense_tensor.shape[0]):\n",
    "    for i2 in range(dense_tensor.shape[1]):\n",
    "        binary_tensor[i1,i2,:] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate)\n",
    "# =============================================================================\n",
    "\n",
    "sparse_tensor = np.multiply(dense_tensor, binary_tensor)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iter: 5\n",
      "Training MAPE: 0.136286\n",
      "Training RMSE: 5.45497\n",
      "\n",
      "Iter: 10\n",
      "Training MAPE: 0.134713\n",
      "Training RMSE: 5.3794\n",
      "\n",
      "Iter: 15\n",
      "Training MAPE: 0.134585\n",
      "Training RMSE: 5.37518\n",
      "\n",
      "Iter: 20\n",
      "Training MAPE: 0.134518\n",
      "Training RMSE: 5.37364\n",
      "\n",
      "Final Imputation MAPE: 0.131941\n",
      "Final Imputation RMSE: 5.27799\n",
      "\n",
      "Running time: 429 seconds.\n"
     ]
    }
   ],
   "source": [
    "import time\n",
    "start = time.time()\n",
    "\n",
    "rank1 = 2\n",
    "rank2 = 2\n",
    "rank3 = 2\n",
    "rank = np.array([rank1, rank2, rank3])\n",
    "lambda_G = 1e+2\n",
    "lambda_U = 2e+1\n",
    "lambda_V = 1e-0\n",
    "lambda_X = 1e-0\n",
    "hyper_lambda = np.array([lambda_G, lambda_U, lambda_V, lambda_X])\n",
    "maxiter = 20\n",
    "tensor_hat, G, U, V, X = Tucker_ALS_L2(sparse_tensor, rank, hyper_lambda, maxiter)\n",
    "pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\n",
    "final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\n",
    "final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\n",
    "print('Final Imputation MAPE: {:.6}'.format(final_mape))\n",
    "print('Final Imputation RMSE: {:.6}'.format(final_rmse))\n",
    "print()\n",
    "\n",
    "end = time.time()\n",
    "print('Running time: %d seconds.'%(end - start))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Appendix B1: CP Factorization on the Guangzhou Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "import scipy.io\n",
    "\n",
    "tensor = scipy.io.loadmat('../Guangzhou-data-set/tensor.mat')\n",
    "dense_tensor = tensor['tensor']\n",
    "random_matrix = scipy.io.loadmat('../Guangzhou-data-set/random_matrix.mat')\n",
    "random_matrix = random_matrix['random_matrix']\n",
    "random_tensor = scipy.io.loadmat('../Guangzhou-data-set/random_tensor.mat')\n",
    "random_tensor = random_tensor['random_tensor']\n",
    "\n",
    "missing_rate = 0.4\n",
    "\n",
    "# =============================================================================\n",
    "### Random missing (RM) scenario\n",
    "### Set the RM scenario by:\n",
    "# binary_tensor = np.round(random_tensor + 0.5 - missing_rate)\n",
    "# =============================================================================\n",
    "\n",
    "# =============================================================================\n",
    "### Non-random missing (NM) scenario\n",
    "### Set the NM scenario by:\n",
    "binary_tensor = np.zeros(dense_tensor.shape)\n",
    "for i1 in range(dense_tensor.shape[0]):\n",
    "    for i2 in range(dense_tensor.shape[1]):\n",
    "        binary_tensor[i1,i2,:] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate)\n",
    "# =============================================================================\n",
    "\n",
    "sparse_tensor = np.multiply(dense_tensor, binary_tensor)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iter: 20\n",
      "Training MAPE: 0.104495\n",
      "Training RMSE: 4.34865\n",
      "\n",
      "Iter: 40\n",
      "Training MAPE: 0.103633\n",
      "Training RMSE: 4.32009\n",
      "\n",
      "Iter: 60\n",
      "Training MAPE: 0.103208\n",
      "Training RMSE: 4.30102\n",
      "\n",
      "Iter: 80\n",
      "Training MAPE: 0.102848\n",
      "Training RMSE: 4.28226\n",
      "\n",
      "Iter: 100\n",
      "Training MAPE: 0.102578\n",
      "Training RMSE: 4.27393\n",
      "\n",
      "Final Imputation MAPE: 0.10446\n",
      "Final Imputation RMSE: 4.37996\n",
      "\n",
      "Running time: 19 seconds.\n"
     ]
    }
   ],
   "source": [
    "import time\n",
    "start = time.time()\n",
    "\n",
    "rank = 10\n",
    "maxiter = 100\n",
    "tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)\n",
    "pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))\n",
    "final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]\n",
    "final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])\n",
    "print('Final Imputation MAPE: {:.6}'.format(final_mape))\n",
    "print('Final Imputation RMSE: {:.6}'.format(final_rmse))\n",
    "print()\n",
    "\n",
    "end = time.time()\n",
    "print('Running time: %d seconds.'%(end - start))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Table 2**: Summarized results of CP factorization on missing data imputation (Guangzhou dataset).\n",
    "\n",
    "|  scenario |    `rank`| `maxiter`|       MAPE |      RMSE | running time |\n",
    "|:----------|---------:|---------:|-----------:|----------:|-------------:|\n",
    "|**40%, NM**|        2 |       20 |     0.1323 |    5.2920 |      14 sec. |\n",
    "|**40%, NM**|        3 |      100 |     0.1226 |    4.9319 |      13 sec. |\n",
    "|**40%, NM**|        4 |      100 |     0.1154 |    4.7171 |      14 sec. |\n",
    "|**40%, NM**|        5 |      100 |     0.1120 |    4.6026 |      15 sec. |\n",
    "|**40%, NM**|        6 |      100 |     0.1094 |    4.5436 |      15 sec. |\n",
    "|**40%, NM**|        7 |      100 |     0.1076 |    4.4770 |      15 sec. |\n",
    "|**40%, NM**|       10 |      100 |     0.1045 |    4.3800 |      19 sec. |\n",
    "|**40%, NM**|       15 |      100 |     0.1011 |    4.2905 |      21 sec. |\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
