{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "h2q27gKz1H20"
      },
      "source": [
        "##### Copyright 2020 The TensorFlow Authors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "TUfAcER1oUS6"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Gb7qyhNL1yWt"
      },
      "source": [
        "# On-device recommendation with TensorFlow Lite"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Fw5Y7snSuG51"
      },
      "source": [
        "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/lite/examples/recommendation/ml/ondevice_recommendation.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/lite/examples/recommendation/ml/ondevice_recommendation.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "\u003c/table\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fyYiyNxVp6mS"
      },
      "source": [
        "# Overview"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CShg7PXmqGUJ"
      },
      "source": [
        "This code base provides an adaptive framework to train and serve on-device\n",
        "recommendation models. The approach personalizes recommendations by leveraging\n",
        "on-device data, and protects user privacy because user data never leaves the device.\n",
        "\n",
        "This notebook shows an end-to-end example that:\n",
        "\n",
        "*   prepares sequential training data\n",
        "*   trains a neural-network model with various encoding techniques\n",
        "*   exports the model to TensorFlow Lite\n",
        "*   integrates the model into on-device ML applications to generate personalized recommendations.\n",
        "\n",
        "We demonstrate the approach with the public\n",
        "[movielens](https://grouplens.org/datasets/movielens/) dataset, but you can\n",
        "adapt the data processing script to your own dataset and train your own\n",
        "recommendation model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bcLF2PKkSbV3"
      },
      "source": [
        "# Prerequisites\n",
        "\n",
        "To run this example, please clone the source code from the GitHub [repo](https://github.com/tensorflow/examples/tree/master/lite/examples/recommendation/ml) and install the required packages."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6cv3K3oaksJv"
      },
      "outputs": [],
      "source": [
        "!git clone https://github.com/tensorflow/examples\n",
        "%cd examples/lite/examples/recommendation/ml/\n",
        "!pip install -r requirements.txt"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qRBdzEu3qGFP"
      },
      "source": [
        "# Model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "m86-Nh4pMHqY"
      },
      "source": [
        "We leverage a dual-encoder model architecture: a context encoder encodes the\n",
        "sequential user history, and a label encoder encodes the predicted recommendation\n",
        "candidate. The similarity between the context and label encodings represents how\n",
        "likely the predicted candidate meets the user's needs.\n",
        "\n",
        "Three sequential user history encoding techniques are provided with\n",
        "this code base:\n",
        "\n",
        "* **Bag-of-words encoder (BOW)**: averages the embeddings of user activities,\n",
        "ignoring context order.\n",
        "* **Convolutional neural-network encoder (CNN)**: applies multiple convolutional\n",
        "layers to generate the context encoding.\n",
        "* **Recurrent neural-network encoder (RNN)**: applies a recurrent neural network\n",
        "(LSTM in this example) to model the context sequence.\n",
        "\n",
        "For user sequence modeling, there are two general approaches:\n",
        "* **ID-based**: puts all recommendation candidates in an embedding space to learn similarities between items. The embeddings are keyed by the item IDs in the item vocabulary, hence the name.\n",
        "* **Feature-based**: uses more features of user activities than just the item IDs, for instance movie genre and movie rating for movie recommendation. The model learns from these features to understand the user.\n",
        "\n",
        "This framework supports both ID-based and feature-based recommendation models in a configurable way.\n",
        "\n"
      ]
    },
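    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sketch of the dual-encoder idea above (pure Python with made-up 2-D embeddings; the real model learns these vectors), serving scores every candidate by the dot product between the context encoding and its label encoding, then keeps the top-k:\n",
        "\n",
        "```python\n",
        "# Minimal sketch of dual-encoder scoring with made-up embeddings.\n",
        "def dot(u, v):\n",
        "    return sum(a * b for a, b in zip(u, v))\n",
        "\n",
        "def top_k(context_encoding, candidate_embeddings, k):\n",
        "    # Score every candidate by similarity to the context encoding.\n",
        "    scores = {cid: dot(context_encoding, emb)\n",
        "              for cid, emb in candidate_embeddings.items()}\n",
        "    # Rank candidates by score and keep the k best.\n",
        "    return sorted(scores, key=scores.get, reverse=True)[:k]\n",
        "\n",
        "context = [0.5, 1.0]            # output of the context encoder\n",
        "candidates = {                  # label encodings, keyed by item ID\n",
        "    101: [0.4, 0.9],\n",
        "    102: [-1.0, 0.2],\n",
        "    103: [0.6, 1.1],\n",
        "}\n",
        "print(top_k(context, candidates, k=2))  # prints [103, 101]\n",
        "```"
      ]
    },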
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jZpnkJ9X_RfA"
      },
      "source": [
        "# Data Adaptivity\n",
        "\n",
        "The framework supports model training and customization with various kinds of data; to that end, we provide a way to configure the input data and the encoder architecture.\n",
        "\n",
        "With the input config, you can specify each input feature's information, such as data type, shape, vocab, and embedding dimension. You can also freely group features into a feature group to encode them together, for instance `movie_id` and `movie_rating`. Three feature types are supported: INT, STRING, and FLOAT. String and integer categorical features are mapped into embedding spaces; for float features we suggest concatenating them directly with the other features' embeddings.\n",
        "\n",
        "Please check out the step-by-step example section below for more details about setting up the input config."
      ]
    },
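    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A toy sketch of the float-feature handling described above (pure Python with a made-up embedding table; `movie_id` is an INT feature and `movie_rating` a FLOAT feature): categorical features go through an embedding lookup, while float features are concatenated directly:\n",
        "\n",
        "```python\n",
        "# Toy sketch: categorical features are embedded, float features are\n",
        "# concatenated directly with the embeddings (made-up numbers).\n",
        "EMBEDDINGS = {          # pretend learned embedding table for movie_id\n",
        "    42: [0.1, 0.2],\n",
        "    7:  [0.3, 0.4],\n",
        "}\n",
        "\n",
        "def encode_activity(movie_id, movie_rating):\n",
        "    # INT feature -> embedding lookup; FLOAT feature -> used as-is.\n",
        "    return EMBEDDINGS[movie_id] + [movie_rating]\n",
        "\n",
        "print(encode_activity(42, 3.5))  # prints [0.1, 0.2, 3.5]\n",
        "```"
      ]
    },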
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5zCIjZgoSpNZ"
      },
      "source": [
        "# Training Data Preparation\n",
        "\n",
        "This notebook uses the public [movielens](https://grouplens.org/datasets/movielens/) dataset to demonstrate training an on-device recommendation model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KpQlunkiSwio"
      },
      "source": [
        "## Examples Generation\n",
        "The examples generation process performs the following steps:\n",
        "\n",
        "\n",
        "*   Downloads [movielens](https://grouplens.org/datasets/movielens/) dataset\n",
        "*   Groups movie rating records by user, and orders per-user movie rating records by timestamp.\n",
        "*   Generates TensorFlow examples with features: 1) `context_movie_id`: \n",
        "time-ordered sequential movie IDs 2) `context_movie_rating`: time-ordered sequential rating numbers 3) `context_movie_genre`: time-ordered sequential movie genres 4) `context_movie_year`: time-ordered sequential movie years. 5) `label_movie_id`: the next movie ID user rated.\n",
        "\n",
        "There are cases where one user activity has multiple values for a single feature. For example, with the movie genre feature in the movielens dataset, each movie can have multiple genres. In this case, we suggest concatenating all movies' genres across the activity sequence. For example, if the user activity sequence is\n",
        "```\n",
        "Star Wars: Episode IV - A New Hope (1977), Genres: Action|Adventure|Fantasy\n",
        "Terminator 2: Judgment Day (1991), Genres: Action|Sci-Fi|Thriller\n",
        "Jurassic Park (1993), Genres: Action|Adventure|Sci-Fi\n",
        "```\n",
        "The context_movie_genre feature will be\n",
        "```\n",
        "\"Action, Adventure, Fantasy, Action, Sci-Fi, Thriller, Action, Adventure, Sci-Fi\"\n",
        "```\n",
        "Since TFLite input tensors must have a fixed length, we suggest padding features to a fixed length."
      ]
    },
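    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The padding described above can be sketched in plain Python (the `UNK` pad value follows the movielens example; the real pipeline does this during example generation):\n",
        "\n",
        "```python\n",
        "# Pad (or truncate) a feature list to a fixed length, as TFLite\n",
        "# input tensors must have a fixed shape.\n",
        "def pad_to_fixed_length(values, length, pad_value):\n",
        "    padded = list(values)[:length]                  # truncate if too long\n",
        "    padded += [pad_value] * (length - len(padded))  # pad if too short\n",
        "    return padded\n",
        "\n",
        "genres = ['Action', 'Adventure', 'Fantasy']\n",
        "print(pad_to_fixed_length(genres, 6, 'UNK'))\n",
        "# prints ['Action', 'Adventure', 'Fantasy', 'UNK', 'UNK', 'UNK']\n",
        "```"
      ]
    },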
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Wff0kYZKS05g"
      },
      "source": [
        "## Vocabularies Generation\n",
        "For string and integer type features, we suggest creating an embedding space for each of them, which requires a vocabulary. This framework supports a txt-file-based vocabulary setup: put each vocab item on its own line in a txt file. In the training input pipeline, the vocabulary table is constructed as:\n",
        "\n",
        "```\n",
        "tf.lookup.StaticVocabularyTable(\n",
        "      tf.lookup.TextFileInitializer(\n",
        "          vocab_path,\n",
        "          key_dtype=key_type,\n",
        "          key_index=tf.lookup.TextFileIndex.WHOLE_LINE,\n",
        "          value_dtype=tf.int64,\n",
        "          value_index=tf.lookup.TextFileIndex.LINE_NUMBER,\n",
        "          delimiter='\\t'),\n",
        "      num_oov_buckets)\n",
        "```\n",
        "Here `vocab_path` is the path to the generated vocabulary txt file.\n",
        "\n",
        "The `data/example_generation_movielens.py` script generates vocabularies when `--build_vocabs` is set to true."
      ]
    },
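    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Conceptually, the table above maps each vocab line to its line number and hashes out-of-vocabulary items into a fixed number of extra buckets. A plain-Python sketch of that behavior (the hash choice here is illustrative, not TensorFlow's):\n",
        "\n",
        "```python\n",
        "# Plain-Python sketch of a vocabulary lookup with OOV buckets,\n",
        "# mirroring tf.lookup.StaticVocabularyTable's behavior conceptually.\n",
        "def build_vocab_table(vocab_lines, num_oov_buckets):\n",
        "    table = {item: idx for idx, item in enumerate(vocab_lines)}\n",
        "    vocab_size = len(table)\n",
        "\n",
        "    def lookup(item):\n",
        "        if item in table:\n",
        "            return table[item]  # in-vocab: its line number\n",
        "        # OOV: hash into one of num_oov_buckets extra ids.\n",
        "        return vocab_size + hash(item) % num_oov_buckets\n",
        "\n",
        "    return lookup\n",
        "\n",
        "lookup = build_vocab_table(['Action', 'Comedy', 'Drama'], num_oov_buckets=2)\n",
        "print(lookup('Comedy'))   # prints 1\n",
        "print(lookup('Western'))  # an OOV id in [3, 5)\n",
        "```"
      ]
    },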
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eY0T_skNS4Uw"
      },
      "source": [
        "## Try out data preparation\n",
        "Please try out the movielens training-example and vocabulary generation script below.\n",
        "\n",
        "Note: If you would like to use your own data, please adapt the data processing script for your specific case."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FQvryCfGtCQX"
      },
      "outputs": [],
      "source": [
        "!python -m data.example_generation_movielens \\\n",
        "  --data_dir=data/raw \\\n",
        "  --output_dir=data/examples \\\n",
        "  --min_timeline_length=3 \\\n",
        "  --max_context_length=10 \\\n",
        "  --max_context_movie_genre_length=32 \\\n",
        "  --min_rating=2 \\\n",
        "  --train_data_fraction=0.9 \\\n",
        "  --build_vocabs=True"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8zcEXFkgCz8g"
      },
      "source": [
        "Raw movielens ratings.dat data is in the following format:\n",
        "UserID::MovieID::Rating::Timestamp\n",
        "\n",
        "*   UserIDs range between 1 and 6040\n",
        "*   MovieIDs range between 1 and 3952\n",
        "*   Ratings are made on a 5-star scale (whole-star ratings only)\n",
        "*   Timestamp is represented in seconds since the epoch as returned by time(2)\n",
        "*   Each user has at least 20 ratings\n",
        "\n",
        "Ref:[movielens readme.txt](http://files.grouplens.org/datasets/movielens/ml-1m-README.txt)\n",
        "\n",
        "In this example, we treat each rating as a movie watch, and construct each user's movie watch history from their rated movie IDs ordered by time.\n",
        "\n",
        "Sample generated training example with a max user history length of 10:\n",
        "```\n",
        "0 : {   # (tensorflow.Example)\n",
        "  features: {   # (tensorflow.Features)\n",
        "    feature: {\n",
        "      key  : \"context_movie_id\"\n",
        "      value: {\n",
        "        int64_list: {\n",
        "          value: [ 3476, 3264, 2120, 1717, 382, 1644, 2328, 2461, 2064, 3679 ]\n",
        "        }\n",
        "      }\n",
        "    }\n",
        "    feature: {\n",
        "      key  : \"context_movie_year\"\n",
        "      value: {\n",
        "        int64_list: {\n",
        "          value: [ 1990, 1992, 1993, 1997, 1994, 1997, 1998, 1990, 1989, 1981 ]\n",
        "        }\n",
        "      }\n",
        "    }\n",
        "    feature: {\n",
        "      key  : \"context_movie_genre\"\n",
        "      value: {\n",
        "        bytes_list: {\n",
        "          value: [ \"Horror\", \"Mystery\", \"Thriller\", \"Comedy\", \"Horror\", \"Drama\", \"Horror\", \"Horror\", \"Thriller\", \"Drama\", \"Horror\", \"Horror\", \"Mystery\", \"Thriller\", \"Horror\", \"Horror\", \"Comedy\", \"Documentary\", \"Documentary\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\", \"UNK\" ]\n",
        "        }\n",
        "      }\n",
        "    }\n",
        "    feature: {\n",
        "      key  : \"context_movie_rating\"\n",
        "      value: {\n",
        "        float_list: {\n",
        "          value: [ 4.0, 4.0, 3.0, 1.0, 3.0, 3.0, 1.0, 4.0, 3.0, 4.0 ]\n",
        "        }\n",
        "      }\n",
        "    }\n",
        "    feature: {\n",
        "      key  : \"label_movie_id\"\n",
        "      value: {\n",
        "        int64_list: {\n",
        "          value: [ 1361 ]\n",
        "        }\n",
        "      }\n",
        "    }\n",
        "  }\n",
        "}\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GApKDT6gRebW"
      },
      "source": [
        "# Model Configuration"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ubHdVNyJXkVE"
      },
      "source": [
        "## Input Configuration\n",
        "\n",
        "The trainer code prepares tf.data datasets and sets up the model according to the input config. You can configure the following:\n",
        "\n",
        "*   Feature: name, data type, feature length, vocab name, vocab size, embedding dimension.\n",
        "*   Feature group: features to encode together, and the encoder type.\n",
        "*   Global feature groups: global features, e.g. user age, profession, etc.\n",
        "*   Activity feature groups: features that represent activities.\n",
        "*   Label feature: the feature used as the label.\n",
        "\n",
        "Both the input data processing and the model architecture setup are based on the input configuration.\n",
        "\n",
        "Check the example input config with the command below:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Mk7hpSlCta95"
      },
      "outputs": [],
      "source": [
        "!cat configs/sample_input_config.pbtxt"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XgjrV1aRqaOp"
      },
      "source": [
        "You can also see the different model graphs generated from different input configs in the appendix section."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XQcQ6AssuBN8"
      },
      "source": [
        "# Train model\n",
        "\n",
        "The training launcher script uses the TensorFlow Keras compile/fit APIs and performs\n",
        "the following steps to kick off the training and evaluation process:\n",
        "\n",
        "*   Sets up the train and eval dataset input functions.\n",
        "*   Constructs the Keras model according to the provided configs. Please refer to the sample config file in the source code to configure your model architecture, such as embedding dimension, convolutional neural network params, LSTM units, etc.\n",
        "*   Sets up the loss function. This code base uses a customized batch softmax loss function.\n",
        "*   Sets up the optimizer, with the flag-specified learning rate and gradient clipping if needed.\n",
        "*   Sets up evaluation metrics; recall@k metrics are provided by default.\n",
        "*   Compiles the model with the loss function, optimizer, and defined metrics.\n",
        "*   Sets up callbacks for TensorBoard and the checkpoint manager.\n",
        "*   Runs model.fit with the compiled model, where you can specify the number of epochs to train, the number of train steps per epoch, and the number of eval steps per epoch.\n",
        "\n",
        "To start training, execute:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3gPKz5InxEbF"
      },
      "outputs": [],
      "source": [
        "!python -m model.recommendation_model_launcher \\\n",
        "  --training_data_filepattern \"data/examples/train_movielens_1m.tfrecord\" \\\n",
        "  --testing_data_filepattern \"data/examples/test_movielens_1m.tfrecord\" \\\n",
        "  --model_dir \"model/model_dir\" \\\n",
        "  --export_dir \"model/model_dir/export_m1\" \\\n",
        "  --vocab_dir \"data/examples\" \\\n",
        "  --input_config_file \"configs/sample_input_config.pbtxt\" \\\n",
        "  --batch_size 32 \\\n",
        "  --learning_rate 0.01 \\\n",
        "  --steps_per_epoch 2 \\\n",
        "  --num_epochs 2 \\\n",
        "  --num_eval_steps 2 \\\n",
        "  --run_mode \"train_and_eval\" \\\n",
        "  --gradient_clip_norm 1.0 \\\n",
        "  --num_predictions 10 \\\n",
        "  --hidden_layer_dims \"32,32\" \\\n",
        "  --eval_top_k \"1,5\" \\\n",
        "  --conv_num_filter_ratios \"2,4\" \\\n",
        "  --conv_kernel_size 4 \\\n",
        "  --lstm_num_units 16"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ObH_mcGcxS96"
      },
      "source": [
        "# Export model\n",
        "\n",
        "The launcher script also provides model export functionality.\n",
        "\n",
        "At serving time, the model takes the user's context history as input; in this example, a vector of the movie IDs the user interacted with. The context encoder computes the context embedding vector, while candidate embedding vectors are generated for all movie candidates in the vocab. The candidates are ranked by dot product with the context embedding, and the top-k candidates are served as the predictions.\n",
        "\n",
        "At the model export step, you can specify the number of predictions you want in the model's output.\n",
        "\n",
        "This step includes:\n",
        "\n",
        "*   Exporting the model to a SavedModel with tf.saved_model.save.\n",
        "*   Converting the SavedModel to TensorFlow Lite with tf.lite.TFLiteConverter.from_saved_model, and saving it to the desired export directory.\n",
        "\n",
        "To export the model, execute:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "SH5r6AxHzGrS"
      },
      "outputs": [],
      "source": [
        "!python -m model.recommendation_model_launcher \\\n",
        "  --training_data_filepattern \"data/examples/train_movielens_1m.tfrecord\" \\\n",
        "  --testing_data_filepattern \"data/examples/test_movielens_1m.tfrecord\" \\\n",
        "  --input_config_file \"configs/sample_input_config.pbtxt\" \\\n",
        "  --model_dir \"model/model_dir\" \\\n",
        "  --export_dir \"model/model_dir/export_m2\" \\\n",
        "  --vocab_dir \"data/examples\" \\\n",
        "  --run_mode \"export\" \\\n",
        "  --checkpoint_path \"model/model_dir/ckpt-4\" \\\n",
        "  --num_predictions 10 \\\n",
        "  --hidden_layer_dims \"32,32\" \\\n",
        "  --eval_top_k \"1,5\" \\\n",
        "  --conv_num_filter_ratios \"2,4\" \\\n",
        "  --conv_kernel_size 4 \\\n",
        "  --lstm_num_units 16"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qXMQ5D5JzSgv"
      },
      "source": [
        "# Model inference\n",
        "\n",
        "You can verify your model's performance by running inference with test examples."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "og0qkYavz3Nt"
      },
      "outputs": [],
      "source": [
        "import tensorflow as tf\n",
        "\n",
        "# Path to the exported TensorFlow Lite model.\n",
        "tflite_model_path = 'model/model_dir/export_m2/model.tflite'  #@param {type:\"string\"}\n",
        "\n",
        "# Create TFLite interpreter.\n",
        "interpreter = tf.lite.Interpreter(tflite_model_path)\n",
        "interpreter.allocate_tensors()\n",
        "input_details = interpreter.get_input_details()\n",
        "output_details = interpreter.get_output_details()\n",
        "print('Display inputs and outputs:')\n",
        "print(input_details)\n",
        "print(output_details)\n",
        "\n",
        "# Find indices.\n",
        "names = [\n",
        "  'serving_default_context_movie_id:0',\n",
        "  'serving_default_context_movie_genre:0',\n",
        "  'serving_default_context_movie_rating:0',\n",
        "]\n",
        "indices = {i['name']: i['index'] for i in input_details}\n",
        "\n",
        "# Fake inputs for illustration. Please change to the real data.\n",
        "# Use [0, 1, ... 9] to represent 10 movies that user interacted with.\n",
        "ids = tf.range(10)\n",
        "interpreter.set_tensor(indices[names[0]], ids)\n",
        "# Use [0, 1, ..., 31] to represent 32 movie genres.\n",
        "genres = tf.range(32)\n",
        "interpreter.set_tensor(indices[names[1]], genres)\n",
        "# Use [1.0, 1.0, ..., 1.0] to represent 10 movie ratings.\n",
        "ratings = tf.ones(10)\n",
        "interpreter.set_tensor(indices[names[2]], ratings)\n",
        "\n",
        "# Run inference.\n",
        "interpreter.invoke()\n",
        "\n",
        "# Get outputs.\n",
        "top_prediction_ids = interpreter.get_tensor(output_details[0]['index'])\n",
        "top_prediction_scores = interpreter.get_tensor(output_details[1]['index'])\n",
        "print('Predicted results:')\n",
        "print('Top ids: {}'.format(top_prediction_ids))\n",
        "print('Top scores: {}'.format(top_prediction_scores))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "A_omMjoT035u"
      },
      "source": [
        "# Integrate in your application\n",
        "\n",
        "We also open source an Android reference app to run inference with TF Lite.\n",
        "**Please follow [`android/app/README.md`](https://github.com/tensorflow/examples/blob/master/lite/examples/recommendation/android/README.md)** to install required developer tools and build Android app."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "N83Ev6nSwsUW"
      },
      "source": [
        "The app bundles a pretrained model to illustrate how to run TFLite inference. If you want to replace it with the model you just trained above, please copy your TF Lite model into the `assets` folder and adapt the file name accordingly. If you trained and exported your model directly in this notebook, the\n",
        "exported model should be located at \"model/model_dir/export_m2/model.tflite\"."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hUaXjqGBvnFP"
      },
      "source": [
        "```shell\n",
        "cp path/to/your/model.tflite ../android/app/src/main/assets/\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bwr7pigOB7pT"
      },
      "source": [
        "The app uses the json file `config.json` to load one model and control how to consume IDs and scores predicted by the TF Lite recommendation model on device. `Config` definition can be found in [`android/app/src/main/java/org/tensorflow/lite/examples/recommendation/Config.java`](../android/app/src/main/java/org/tensorflow/lite/examples/recommendation/Config.java).\n",
        "\n",
        "A sample json is presented below for the built-in model, and you may need to *adapt* it and related code to handle your own trained model.\n",
        "\n",
        "``` json\n",
        "{\n",
        "  \"model\": \"\u003cyour_model\u003e.tflite\",\n",
        "  \"inputs\": [\n",
        "    {\"name\": \"movieFeature\", \"index\": 0, \"inputLength\": 10},\n",
        "    {\"name\": \"genreFeature\", \"index\": 1, \"inputLength\": 32}\n",
        "  ],\n",
        "  \"movieList\": \"sorted_movie_vocab.json\",\n",
        "  \"genreList\": \"movie_genre_vocab.txt\",\n",
        "  \"topK\": 10,\n",
        "  \"outputLength\": 10,\n",
        "  \"outputIdsIndex\": 0,\n",
        "  \"outputScoresIndex\": 1\n",
        "}\n",
        "```\n",
        "# Further reading\n",
        "If you want to read more about the technology related to on-device recommendation, please check out our [blogpost](https://blog.tensorflow.org/2021/04/adaptive-framework-for-on-device-recommendation.html)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2egsu0GMsjyB"
      },
      "source": [
        "# Appendix\n",
        "\n",
        "\u003e Model A: ID-based bag-of-words\n",
        "\n",
        "```\n",
        "activity_feature_groups {\n",
        "  features {\n",
        "    feature_name: \"context_movie_id\"\n",
        "    feature_type: INT\n",
        "    vocab_size: 3953\n",
        "    embedding_dim: 8\n",
        "    feature_length: 10\n",
        "  }\n",
        "  encoder_type: BOW\n",
        "}\n",
        "label_feature {\n",
        "  feature_name: \"label_movie_id\"\n",
        "  feature_type: INT\n",
        "  vocab_size: 3953\n",
        "  embedding_dim: 8\n",
        "  feature_length: 1\n",
        "}\n",
        "```\n",
        "\n",
        "\u003e Model B: Feature-based CNN\n",
        "\n",
        "```\n",
        "activity_feature_groups {\n",
        "  features {\n",
        "    feature_name: \"context_movie_id\"\n",
        "    feature_type: INT\n",
        "    vocab_size: 3953\n",
        "    embedding_dim: 8\n",
        "    feature_length: 10\n",
        "  }\n",
        "  features {\n",
        "    feature_name: \"context_movie_rating\"\n",
        "    feature_type: FLOAT\n",
        "    feature_length: 10\n",
        "  }\n",
        "  encoder_type: CNN\n",
        "}\n",
        "activity_feature_groups {\n",
        "  features {\n",
        "    feature_name: \"context_movie_genre\"\n",
        "    feature_type: STRING\n",
        "    vocab_name: \"movie_genre_vocab.txt\"\n",
        "    vocab_size: 19\n",
        "    embedding_dim: 4\n",
        "    feature_length: 32\n",
        "  }\n",
        "  encoder_type: CNN\n",
        "}\n",
        "label_feature {\n",
        "  feature_name: \"label_movie_id\"\n",
        "  feature_type: INT\n",
        "  vocab_size: 3953\n",
        "  embedding_dim: 8\n",
        "  feature_length: 1\n",
        "}\n",
        "```"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [],
      "name": "ondevice_recommendation.ipynb",
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
