{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "ncf_xshards_pandas.ipynb",
      "provenance": [],
      "authorship_tag": "ABX9TyOM7ajYkrJiW2lmSTA1lvws",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel-analytics/BigDL/blob/main/python/orca/colab-notebook/quickstart/ncf_xshards_pandas.ipynb)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ctQfvcg3zVwO"
      },
      "source": [
        "---"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-GW5WAqASSYH"
      },
      "source": [
        "## **Environment Preparation**"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zpi0yh_mzeJt"
      },
      "source": [
        "**Install Java 8**\r\n",
        "\r\n",
        "Run the following cell on **Google Colab** to install JDK 1.8.\r\n",
        "\r\n",
        "**Note:** if you run this notebook on your own computer, root permission is required to install Java 8. (You may skip this cell if Java 8 is already set up on your machine.)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BCGEV3WdSxfk"
      },
      "source": [
        "# Install jdk8\r\n",
        "!apt-get install openjdk-8-jdk-headless -qq > /dev/null\r\n",
        "import os\r\n",
        "# Set environment variable JAVA_HOME.\r\n",
        "os.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\"\r\n",
        "!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java\r\n",
        "!java -version"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9AYiL0wdzlbr"
      },
      "source": [
        "**Install BigDL Orca**\r\n",
        "\r\n",
        "You can install the latest release version using `pip install bigdl-orca`, or the latest pre-release version using `pip install --pre --upgrade bigdl-orca`. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "r1nWc6sIS22L"
      },
      "source": [
        "# Install the latest pre-release version of bigdl-orca.\r\n",
        "# Installing bigdl from pip will automatically install pyspark, bigdl, and their dependencies.\r\n",
        "!pip install --pre --upgrade bigdl-orca"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "WR6mnxkJS3nd"
      },
      "source": [
        "# Install python dependencies\r\n",
        "!pip install tensorflow==1.15.0"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9fUm8BXjS-R4"
      },
      "source": [
        "## **Data-Parallel Pandas with XShards for Distributed Deep Learning** \r\n",
        "\r\n",
        "In this guide we will describe how to use `XShards` in Orca to process large-scale datasets using existing Python code in a distributed and data-parallel fashion."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TbhU3BPJzyCu"
      },
      "source": [
        "#### **Initialization** \r\n",
        "\r\n",
        "Import the necessary libraries."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2H0U2Jd5z4tC"
      },
      "source": [
        "import os\r\n",
        "import zipfile\r\n",
        "import argparse\r\n",
        "\r\n",
        "import numpy as np\r\n",
        "import tensorflow as tf\r\n",
        "\r\n",
        "from bigdl.dllib.feature.dataset import base\r\n",
        "from sklearn.model_selection import train_test_split\r\n",
        "\r\n",
        "from bigdl.orca import init_orca_context, stop_orca_context\r\n",
        "from bigdl.orca import OrcaContext\r\n",
        "from bigdl.orca.learn.tf.estimator import Estimator\r\n",
        "from bigdl.orca.data import SharedValue\r\n",
        "import bigdl.orca.data.pandas"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mQjrQBwkz7s6"
      },
      "source": [
        "### **Init Orca Context** "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "P1SVUG3uz-EH"
      },
      "source": [
        "# It is recommended to set log_output to True when running BigDL in a Jupyter notebook,\r\n",
        "# as this will display the terminal's stdout and stderr in the notebook.\r\n",
        "OrcaContext.log_output = True\r\n",
        "\r\n",
        "cluster_mode = \"local\"\r\n",
        "\r\n",
        "if cluster_mode == \"local\":  \r\n",
        "    init_orca_context(cluster_mode=\"local\", cores=4) # run in local mode\r\n",
        "elif cluster_mode == \"yarn\":  \r\n",
        "    init_orca_context(cluster_mode=\"yarn-client\", num_nodes=2, cores=2, driver_memory=\"6g\") # run on Hadoop YARN cluster"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hK1Cla2h0BM8"
      },
      "source": [
        "### **Data Preprocessing with XShards**\r\n",
        "\r\n",
        "An `XShards` contains an automatically sharded (or partitioned) Python object (e.g., a Pandas DataFrame, NumPy ndarray, Python dictionary or list, etc.). Each partition of an `XShards` stores a subset of the Python object and is distributed across different nodes in the cluster, and the user may run arbitrary Python code on each partition in a data-parallel fashion using `XShards.transform_shard`."
      ]
    },
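    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal local sketch (plain Pandas, no cluster; the data and the `add_one` function below are purely illustrative), the per-partition behavior of `XShards.transform_shard(func)` is equivalent to calling `func` on the Pandas DataFrame held in each partition:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import pandas as pd\n",
        "\n",
        "# A toy DataFrame standing in for the contents of one partition.\n",
        "shard_df = pd.DataFrame({'user': [1, 2], 'item': [10, 20], 'label': [5, 3]})\n",
        "\n",
        "def add_one(df):\n",
        "    # An arbitrary per-partition transformation.\n",
        "    df['label'] = df['label'] + 1\n",
        "    return df\n",
        "\n",
        "# transform_shard would apply add_one to every partition; locally:\n",
        "print(add_one(shard_df)['label'].tolist())  # -> [6, 4]"
      ],
      "execution_count": null,
      "outputs": []
    },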
    {
      "cell_type": "code",
      "metadata": {
        "id": "iX5YQVpUQ92i"
      },
      "source": [
        "# Download and extract movielens 1M data.\r\n",
        "url = 'http://files.grouplens.org/datasets/movielens/ml-1m.zip'\r\n",
        "local_file = base.maybe_download('ml-1m.zip', '.', url)\r\n",
        "if not os.path.exists('./ml-1m'):\r\n",
        "    with zipfile.ZipFile(local_file, 'r') as zip_ref:\r\n",
        "        zip_ref.extractall('.')"
      ],
      "execution_count": 6,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1cYRAne6QklV"
      },
      "source": [
        "# Read in the dataset and do a little preprocessing.\r\n",
        "rating_files = \"./ml-1m/ratings.dat\"\r\n",
        "new_rating_files = \"./ml-1m/ratings_new.dat\"\r\n",
        "if not os.path.exists(new_rating_files):\r\n",
        "    with open(rating_files, \"rt\") as fin, open(new_rating_files, \"wt\") as fout:\r\n",
        "        for line in fin:\r\n",
        "            # Replace '::' with ':' for Spark 2.4 support.\r\n",
        "            fout.write(line.replace('::', ':'))"
      ],
      "execution_count": 7,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BUC8DXB0OrsZ"
      },
      "source": [
        "Read the MovieLens CSV file into an XShards of Pandas DataFrame."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-f2_JoVGSI1c"
      },
      "source": [
        "full_data = bigdl.orca.data.pandas.read_csv(new_rating_files, sep=':', header=None,\r\n",
        "                                          names=['user', 'item', 'label'], usecols=[0, 1, 2],\r\n",
        "                                          dtype={0: np.int32, 1: np.int32, 2: np.int32})\r\n",
        "user_set = set(full_data['user'].unique())\r\n",
        "item_set = set(full_data['item'].unique())\r\n",
        "\r\n",
        "min_user_id = min(user_set)\r\n",
        "max_user_id = max(user_set)\r\n",
        "min_item_id = min(item_set)\r\n",
        "max_item_id = max(item_set)\r\n",
        "print(min_user_id, max_user_id, min_item_id, max_item_id)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rmplUTretUry"
      },
      "source": [
        "`XShards` lets you process a large-scale dataset with existing Python code in a distributed and data-parallel fashion: \r\n",
        "run Python code on each partition using `XShards.transform_shard`, as shown below. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NcVi7idySUdj"
      },
      "source": [
        "# Shift labels to start from 0: ratings go from 1 to 5, while the class indices go from 0 to 4.\r\n",
        "def update_label(df):\r\n",
        "    df['label'] = df['label'] - 1\r\n",
        "    return df\r\n",
        "\r\n",
        "# Run Python code on each partition in a data-parallel fashion using `XShards.transform_shard`.\r\n",
        "full_data = full_data.transform_shard(update_label)"
      ],
      "execution_count": 9,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HW7BCdAWoXGk",
        "outputId": "ed1bcce2-499b-4d22-fc25-c459c055d42f",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "source": [
        "# Split into train/test datasets.\r\n",
        "def split_train_test(data):\r\n",
        "    train, test = train_test_split(data, test_size=0.2, random_state=100)\r\n",
        "    return train, test\r\n",
        "\r\n",
        "train_data, test_data = full_data.transform_shard(split_train_test).split()"
      ],
      "execution_count": 10,
      "outputs": []
    },
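    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "When the function passed to `transform_shard` returns a tuple, calling `.split()` on the result separates it into one `XShards` per tuple element, so each partition is split independently. As a local sketch (plain Pandas and scikit-learn with illustrative data), a single partition's split behaves like:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import pandas as pd\n",
        "from sklearn.model_selection import train_test_split\n",
        "\n",
        "# A toy partition of 10 rows.\n",
        "df = pd.DataFrame({'user': list(range(10)), 'item': list(range(10)), 'label': [1] * 10})\n",
        "\n",
        "# The fixed random_state (as used above) makes the split reproducible.\n",
        "train, test = train_test_split(df, test_size=0.2, random_state=100)\n",
        "print(len(train), len(test))  # -> 8 2"
      ],
      "execution_count": null,
      "outputs": []
    },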
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Hw5EI0VKSpg8"
      },
      "source": [
        "### **Define NCF Model**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MtmSkXcNS4jy"
      },
      "source": [
        "class NCF(object):\n",
        "    def __init__(self, embed_size, user_size, item_size):\n",
        "        self.user = tf.placeholder(dtype=tf.int32, shape=(None,))\n",
        "        self.item = tf.placeholder(dtype=tf.int32, shape=(None,))\n",
        "        self.label = tf.placeholder(dtype=tf.int32, shape=(None,))\n",
        "  \n",
        "        # GMF part starts\n",
        "        with tf.name_scope(\"GMF\"):\n",
        "            user_embed_GMF = tf.contrib.layers.embed_sequence(self.user, vocab_size=user_size + 1,\n",
        "                                                              embed_dim=embed_size)\n",
        "            item_embed_GMF = tf.contrib.layers.embed_sequence(self.item, vocab_size=item_size + 1,\n",
        "                                                              embed_dim=embed_size)\n",
        "            GMF = tf.multiply(user_embed_GMF, item_embed_GMF)\n",
        "\n",
        "        # MLP part starts\n",
        "        with tf.name_scope(\"MLP\"):\n",
        "            user_embed_MLP = tf.contrib.layers.embed_sequence(self.user, vocab_size=user_size + 1,\n",
        "                                                              embed_dim=embed_size)\n",
        "            item_embed_MLP = tf.contrib.layers.embed_sequence(self.item, vocab_size=item_size + 1,\n",
        "                                                              embed_dim=embed_size)\n",
        "            interaction = tf.concat([user_embed_MLP, item_embed_MLP], axis=-1)\n",
        "            layer1_MLP = tf.layers.dense(inputs=interaction, units=embed_size * 2)\n",
        "            layer1_MLP = tf.layers.dropout(layer1_MLP, rate=0.2)\n",
        "            layer2_MLP = tf.layers.dense(inputs=layer1_MLP, units=embed_size)\n",
        "            layer2_MLP = tf.layers.dropout(layer2_MLP, rate=0.2)\n",
        "            layer3_MLP = tf.layers.dense(inputs=layer2_MLP, units=embed_size // 2)\n",
        "            layer3_MLP = tf.layers.dropout(layer3_MLP, rate=0.2)\n",
        "\n",
        "        # Concatenate the two parts\n",
        "        with tf.name_scope(\"concatenation\"):\n",
        "            concatenation = tf.concat([GMF, layer3_MLP], axis=-1)\n",
        "            self.logits = tf.layers.dense(inputs=concatenation, units=5)\n",
        "            self.logits_softmax = tf.nn.softmax(self.logits)\n",
        "            self.class_number = tf.argmax(self.logits_softmax, 1)\n",
        "\n",
        "        with tf.name_scope(\"loss\"):\n",
        "            self.loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(\n",
        "                labels=self.label, logits=self.logits, name='loss'))\n",
        "\n",
        "        with tf.name_scope(\"optimization\"):\n",
        "            self.optim = tf.train.AdamOptimizer(1e-3, name='Adam')\n",
        "            self.optimizer = self.optim.minimize(self.loss)\n",
        "\n",
        "embedding_size = 16\n",
        "model = NCF(embedding_size, max_user_id, max_item_id)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-JA-pL2tgrJg"
      },
      "source": [
        "### **Fit with Orca Estimator**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dYZRDHnFS8D2"
      },
      "source": [
        "batch_size = 1280\r\n",
        "epochs = 1\r\n",
        "model_dir = './'\r\n",
        "\r\n",
        "# create an Estimator.\r\n",
        "estimator = Estimator.from_graph(\r\n",
        "            inputs=[model.user, model.item],\r\n",
        "            outputs=[model.class_number],\r\n",
        "            labels=[model.label],\r\n",
        "            loss=model.loss,\r\n",
        "            optimizer=model.optim,\r\n",
        "            model_dir=model_dir,\r\n",
        "            metrics={\"loss\": model.loss})\r\n",
        "\r\n",
        "# fit the Estimator\r\n",
        "estimator.fit(data=train_data,\r\n",
        "              batch_size=batch_size,\r\n",
        "              epochs=epochs,\r\n",
        "              feature_cols=['user', 'item'],\r\n",
        "              label_cols=['label'],\r\n",
        "              validation_data=test_data)\r\n",
        "\r\n",
        "checkpoint_path = os.path.join(model_dir, \"NCF.ckpt\")\r\n",
        "estimator.save_tf_checkpoint(checkpoint_path)\r\n",
        "estimator.shutdown()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "T3Hw0PnVTCbR"
      },
      "source": [
        "# predict using the Estimator\r\n",
        "def predict(predict_data, user_size, item_size):\r\n",
        "\r\n",
        "    tf.reset_default_graph()\r\n",
        "\r\n",
        "    with tf.Session() as sess:\r\n",
        "        model = NCF(embedding_size, user_size, item_size)\r\n",
        "\r\n",
        "        saver = tf.train.Saver(tf.global_variables())\r\n",
        "        checkpoint_path = os.path.join(model_dir, \"NCF.ckpt\")\r\n",
        "        saver.restore(sess, checkpoint_path)\r\n",
        "\r\n",
        "        estimator = Estimator.from_graph(\r\n",
        "            inputs=[model.user, model.item],\r\n",
        "            outputs=[model.class_number],\r\n",
        "            sess=sess,\r\n",
        "            model_dir=model_dir\r\n",
        "        )\r\n",
        "        predict_result = estimator.predict(predict_data, feature_cols=['user', 'item'])\r\n",
        "        predictions = predict_result.collect()\r\n",
        "        assert 'prediction' in predictions[0]\r\n",
        "        print(predictions[0]['prediction'])"
      ],
      "execution_count": 15,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "l5y5GuGATOX8"
      },
      "source": [
        "predict(test_data, max_user_id, max_item_id)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "l1n5JVz7TRCb"
      },
      "source": [
        "stop_orca_context()"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}
