{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "-Jv7Y4hXwt0j" }, "source": [ "# Question duplicates\n", "\n", "We will explore Siamese networks applied to natural language processing. We will further explore the fundamentals of TensorFlow and we will be able to implement a more complicated structure using it. By completing this project, we will learn how to implement models with different architectures. \n", "\n", "\n", "## Outline\n", "\n", "- [Overview](#0)\n", "- [Part 1: Importing the Data](#1)\n", " - [1.1 Loading in the data](#1.1)\n", " - [1.2 Learn question encoding](#1.2)\n", "- [Part 2: Defining the Siamese model](#2)\n", " - [2.1 Understanding the Siamese Network](#2.1)\n", " - [Exercise 01](#ex01)\n", " - [2.2 Hard Negative Mining](#2.2)\n", " - [Exercise 02](#ex02)\n", "- [Part 3: Training](#3)\n", " - [3.1 Training the model](#3.1)\n", " - [Exercise 03](#ex03)\n", "- [Part 4: Evaluation](#4)\n", " - [4.1 Evaluating your siamese network](#4.1)\n", " - [4.2 Classify](#4.2)\n", " - [Exercise 04](#ex04)\n", "- [Part 5: Testing with your own questions](#5)\n", " - [Exercise 05](#ex05)\n", "- [On Siamese networks](#6)\n", "\n", "\n", "### Overview\n", "In particular, in this assignment you will: \n", "\n", "- Learn about Siamese networks\n", "- Understand how the triplet loss works\n", "- Understand how to evaluate accuracy\n", "- Use cosine similarity between the model's outputted vectors\n", "- Use the data generator to get batches of questions\n", "- Predict using your own model\n", "\n", "By now, you should be familiar with Tensorflow and know how to make use of it to define your model. We will start this homework by asking you to create a vocabulary in a similar way as you did in the previous assignments. After this, you will build a classifier that will allow you to identify whether two questions are the same or not. \n", "\n", "\n", "\n", "\n", "Your model will take in the two questions, which will be transformed into tensors, each tensor will then go through embeddings, and after that an LSTM. Finally you will compare the outputs of the two subnetworks using cosine similarity. \n", "\n", "Before taking a deep dive into the model, you will start by importing the data set, and exploring it a bit.\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "4sF9Hqzgwt0l" }, "source": [ "###### \n", "# Part 1: Importing the Data\n", "\n", "### 1.1 Loading in the data\n", "\n", "You will be using the 'Quora question answer' dataset to build a model that can identify similar questions. This is a useful task because you don't want to have several versions of the same question posted. Several times when teaching I end up responding to similar questions on piazza, or on other community forums. This data set has already been labeled for you. Run the cell below to import some of the packages you will be using. 
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "colab_type": "code", "deletable": false, "editable": false, "id": "zdACgs491cs2", "outputId": "b31042ef-845b-46b8-c783-185e96b135f7" }, "outputs": [], "source": [ "import os\n", "import numpy as np\n", "import pandas as pd\n", "import random as rnd\n", "import tensorflow as tf\n" ] }, { "cell_type": "code", "execution_count": 85, "metadata": { "deletable": false, "editable": false }, "outputs": [], "source": [ "import w3_unittest" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "3GYhQRMspitx" }, "source": [ "You will now load the data set. We have done some preprocessing for you. If you have taken the deeplearning specialization, this is a slightly different training method than the one you have seen there. If you have not, then don't worry about it, we will explain everything. " ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 528 }, "colab_type": "code", "deletable": false, "editable": false, "id": "sXWBVGWnpity", "outputId": "afa90d4d-fed7-43b8-bcba-48c95d600ad5", "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of question pairs: 404351\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
idqid1qid2question1question2is_duplicate
0012What is the step by step guide to invest in sh...What is the step by step guide to invest in sh...0
1134What is the story of Kohinoor (Koh-i-Noor) Dia...What would happen if the Indian government sto...0
2256How can I increase the speed of my internet co...How can Internet speed be increased by hacking...0
3378Why am I mentally very lonely? How can I solve...Find the remainder when [math]23^{24}[/math] i...0
44910Which one dissolve in water quikly sugar, salt...Which fish would survive in salt water?0
\n", "
" ], "text/plain": [ " id qid1 qid2 question1 \\\n", "0 0 1 2 What is the step by step guide to invest in sh... \n", "1 1 3 4 What is the story of Kohinoor (Koh-i-Noor) Dia... \n", "2 2 5 6 How can I increase the speed of my internet co... \n", "3 3 7 8 Why am I mentally very lonely? How can I solve... \n", "4 4 9 10 Which one dissolve in water quikly sugar, salt... \n", "\n", " question2 is_duplicate \n", "0 What is the step by step guide to invest in sh... 0 \n", "1 What would happen if the Indian government sto... 0 \n", "2 How can Internet speed be increased by hacking... 0 \n", "3 Find the remainder when [math]23^{24}[/math] i... 0 \n", "4 Which fish would survive in salt water? 0 " ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.read_csv(\"./data/questions.csv\")\n", "N = len(data)\n", "print('Number of question pairs: ', N)\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "gkSQTu7Ypit0" }, "source": [ "First, you will need to split the data into a training and test set. The test set will be used later to evaluate your model." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "colab_type": "code", "deletable": false, "editable": false, "id": "z00A7vEMpit1", "outputId": "c12ae7e8-a959-4f56-aa29-6ad34abc1c81", "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Train set: 300000 Test set: 10240\n" ] } ], "source": [ "N_train = 300000\n", "N_test = 10240\n", "data_train = data[:N_train]\n", "data_test = data[N_train:N_train + N_test]\n", "print(\"Train set:\", len(data_train), \"Test set:\", len(data_test))\n", "del (data) # remove to free memory" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "FbqIRRyEpit4" }, "source": [ "As explained in the lectures, you will select only the question pairs that are duplicate to train the model.
\n", "You need to build two sets of questions as input for the Siamese network, assuming that question $q1_i$ (question $i$ in the first set) is a duplicate of $q2_i$ (question $i$ in the second set), but all other questions in the second set are not duplicates of $q1_i$. \n", "The test set uses the original pairs of questions and the status describing if the questions are duplicates.\n", "\n", "The following cells are in charge of selecting only duplicate questions from the training set, which will give you a smaller dataset. First find the indexes with duplicate questions.\n", "\n", "You will start by identifying the indexes in the training set which correspond to duplicate questions. For this you will define a boolean variable `td_index`, which has value `True` if the index corresponds to duplicate questions and `False` otherwise." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 51 }, "colab_type": "code", "deletable": false, "editable": false, "id": "Xi_TwXxxpit4", "outputId": "f146046f-9c0d-4d8a-ecf8-8d6a4a5371f7", "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of duplicate questions: 111486\n", "Indexes of first ten duplicate questions: [5, 7, 11, 12, 13, 15, 16, 18, 20, 29]\n" ] } ], "source": [ "td_index = data_train['is_duplicate'] == 1\n", "td_index = [i for i, x in enumerate(td_index) if x]\n", "print('Number of duplicate questions: ', len(td_index))\n", "print('Indexes of first ten duplicate questions:', td_index[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You will first need to split the data into a training and test set. The test set will be used later to evaluate your model." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 68 }, "colab_type": "code", "deletable": false, "editable": false, "id": "3I9oXSsKpit7", "outputId": "6f6bd3a1-219f-4fb3-a524-450c38bf44ba", "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?\n", "I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?\n", "is_duplicate: 1\n" ] } ], "source": [ "print(data_train['question1'][5])\n", "print(data_train['question2'][5])\n", "print('is_duplicate: ', data_train['is_duplicate'][5])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, keep only the rows in the original training set that correspond to the rows where `td_index` is `True`" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "colab": {}, "colab_type": "code", "deletable": false, "editable": false, "id": "XHpZO58Dss_v", "tags": [] }, "outputs": [], "source": [ "Q1_train = np.array(data_train['question1'][td_index])\n", "Q2_train = np.array(data_train['question2'][td_index])\n", "\n", "Q1_test = np.array(data_test['question1'])\n", "Q2_test = np.array(data_test['question2'])\n", "y_test = np.array(data_test['is_duplicate'])" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "P5vBkxunpiuB" }, "source": [ "
Let's print a few examples to see what the data looks like." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 170 }, "colab_type": "code", "deletable": false, "editable": false, "id": "joyrS1XEpLWn", "outputId": "3257cde7-3164-40d9-910e-fa91eae917a0", "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TRAINING QUESTIONS:\n", "\n", "Question 1: Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?\n", "Question 2: I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me? \n", "\n", "Question 1: What would a Trump presidency mean for current international master’s students on an F1 visa?\n", "Question 2: How will a Trump presidency affect the students presently in US or planning to study in US? \n", "\n", "TESTING QUESTIONS:\n", "\n", "Question 1: How do I prepare for interviews for cse?\n", "Question 2: What is the best way to prepare for cse? \n", "\n", "is_duplicate = 0 \n", "\n" ] } ], "source": [ "print('TRAINING QUESTIONS:\\n')\n", "print('Question 1: ', Q1_train[0])\n", "print('Question 2: ', Q2_train[0], '\\n')\n", "print('Question 1: ', Q1_train[5])\n", "print('Question 2: ', Q2_train[5], '\\n')\n", "\n", "print('TESTING QUESTIONS:\\n')\n", "print('Question 1: ', Q1_test[0])\n", "print('Question 2: ', Q2_test[0], '\\n')\n", "print('is_duplicate =', y_test[0], '\\n')" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "SuggGPaQpiuY" }, "source": [ "Finally, split your training set into training/validation sets so that you can use them at training time." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "deletable": false, "editable": false, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of duplicate questions: 111486\n", "The length of the training set is: 89188\n", "The length of the validation set is: 22298\n" ] } ], "source": [ "# Splitting the data\n", "cut_off = int(len(Q1_train) * 0.8)\n", "train_Q1, train_Q2 = Q1_train[:cut_off], Q2_train[:cut_off]\n", "val_Q1, val_Q2 = Q1_train[cut_off:], Q2_train[cut_off:]\n", "print('Number of duplicate questions: ', len(Q1_train))\n", "print(\"The length of the training set is: \", len(train_Q1))\n", "print(\"The length of the validation set is: \", len(val_Q1))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "BDcxEmX31y3d" }, "source": [ "\n", "### 1.2 Learning question encoding\n", "\n", "The next step is to learn how to encode each question as a list of numbers (integers). You will start by building a word dictionary, or vocabulary, containing all the words in your training dataset, which you will then use to encode each word of the selected duplicate pairs with an index. \n", "\n", "For this task you will be using the [`TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/TextVectorization) layer from Keras, which will take care of everything for you. Begin by setting a seed, so we all get the same encoding.\n", "\n", "The vocabulary is learned using the `.adapt()` method. This will analyze the dataset, determine the frequency of individual string values, and create a vocabulary from them. If needed, you can later access the vocabulary by using `.get_vocabulary()`."
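, "\n", "\n", "For instance, once the vectorizer has been adapted in the cells below, you could peek at the most frequent tokens (an illustrative sketch; the exact tokens depend on your data):\n", "\n", "```python\n", "# Index 0 is reserved for padding and index 1 for out-of-vocabulary tokens\n", "print(text_vectorization.get_vocabulary()[:5])  # (illustrative) e.g. ['', '[UNK]', 'the', 'What', 'is']\n", "```"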
] }, { "cell_type": "code", "execution_count": 12, "metadata": { "deletable": false, "editable": false, "tags": [] }, "outputs": [], "source": [ "tf.random.set_seed(0)\n", "text_vectorization = tf.keras.layers.TextVectorization(output_mode='int',split='whitespace', standardize='strip_punctuation')\n", "text_vectorization.adapt(np.concatenate((Q1_train,Q2_train)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, it is set to split text on whitespaces and it's stripping the punctuation from text. You can check how big your vocabulary is." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "deletable": false, "editable": false, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Vocabulary size: 36224\n" ] } ], "source": [ "print(f'Vocabulary size: {text_vectorization.vocabulary_size()}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also call `text_vectorization` to see what the encoding looks like for the first questions of the training and test datasets" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "deletable": false, "editable": false, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "first question in the train set:\n", "\n", "Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me? \n", "\n", "encoded version:\n", "tf.Tensor(\n", "[ 6984 6 178 10 8988 2442 35393 761 13 6636 28205 31\n", " 28 483 45 98], shape=(16,), dtype=int64) \n", "\n", "first question in the test set:\n", "\n", "How do I prepare for interviews for cse? \n", "\n", "encoded version:\n", "tf.Tensor([ 4 8 6 160 17 2079 17 11775], shape=(8,), dtype=int64)\n" ] } ], "source": [ "print('first question in the train set:\\n')\n", "print(Q1_train[0], '\\n') \n", "print('encoded version:')\n", "print(text_vectorization(Q1_train[0]),'\\n')\n", "\n", "print('first question in the test set:\\n')\n", "print(Q1_test[0], '\\n')\n", "print('encoded version:')\n", "print(text_vectorization(Q1_test[0]) )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Expected output:\n", "```\n", "first question in the train set:\n", "\n", "Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me? \n", "\n", "encoded version:\n", "tf.Tensor(\n", "[ 6984 6 178 10 8988 2442 35393 761 13 6636 28205 31\n", " 28 483 45 98], shape=(16,), dtype=int64) \n", "\n", "first question in the test set:\n", "\n", "How do I prepare for interviews for cse? \n", "\n", "encoded version:\n", "tf.Tensor([ 4 8 6 160 17 2079 17 11775], shape=(8,), dtype=int64)\n", "```" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "KmZRBoaMwt0w" }, "source": [ "\n", "# Part 2: Defining the Siamese model\n", "\n", "\n", "\n", "### 2.1 Understanding the Siamese Network \n", "A Siamese network is a neural network which uses the same weights while working in tandem on two different input vectors to compute comparable output vectors. The Siamese network you are about to implement looks something like this:\n", "\n", "\n", "\n", "You get the question, get it vectorized and embedded, run it through an LSTM layer, normalize $v_1$ and $v_2$, and finally get the corresponding cosine similarity for each pair of questions (remember that each question is a single string). 
Because of the implementation of the loss function you will see in the next section, you are not going to have the cosine similarity as the output of your Siamese network, but rather $v_1$ and $v_2$. You will add the cosine similarity step once you reach the classification step. \n", "\n", "To train the model, you will use the triplet loss (explained below). This loss makes use of a baseline (anchor) input that is compared to a positive (truthy) input and a negative (falsy) input. The (cosine) distance from the baseline input to the positive input is minimized, and the distance from the baseline input to the negative input is maximized. Mathematically, you are trying to minimize the following loss:\n", "\n", "$$\mathcal{L}(A, P, N)=\max \left(\|\mathrm{f}(A)-\mathrm{f}(P)\|^{2}-\|\mathrm{f}(A)-\mathrm{f}(N)\|^{2}+\alpha, 0\right),$$\n", "\n", "where $A$ is the anchor input, for example $q1_1$, $P$ is the duplicate input, for example $q2_1$, and $N$ is the negative input (the non-duplicate question), for example $q2_2$.
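\n", "\n", "For intuition, here is a minimal NumPy sketch of this single-triplet loss on made-up unit-length embeddings (for L2-normalized vectors, $\|u-v\|^{2}=2-2\cos(u,v)$, which is how this distance-based form connects to the cosine similarity used by the model):\n", "\n", "```python\n", "import numpy as np\n", "\n", "alpha = 0.25                 # margin\n", "f_A = np.array([0.6, 0.8])   # anchor embedding (made-up)\n", "f_P = np.array([0.8, 0.6])   # positive example: close to the anchor\n", "f_N = np.array([0.0, -1.0])  # negative example: far from the anchor\n", "\n", "loss = max(np.sum((f_A - f_P)**2) - np.sum((f_A - f_N)**2) + alpha, 0)\n", "print(loss)  # 0.0: this negative is already far beyond the margin\n", "```\n", "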
\n", "$\\alpha$ is a margin; you can think about it as a safety net, or by how much you want to push the duplicates from the non duplicates. This is the essence of the triplet loss. However, as you will see in the next section, you will be using a pretty smart trick to improve your training, known as hard negative mining. \n", "
\n", "\n", "\n", "### Exercise 01\n", "\n", "**Instructions:** Implement the `Siamese` function below. You should be using all the functions explained below. \n", "\n", "To implement this model, you will be using `TensorFlow`. Concretely, you will be using the following functions.\n", "\n", "\n", "- [`tf.keras.models.Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential): groups a linear stack of layers into a tf.keras.Model.\n", " - You can pass in the layers as arguments to `Serial`, separated by commas, or simply instantiate the `Sequential`model and use the `add` method to add layers.\n", " - For example: `Sequential(Embeddings(...), AveragePooling1D(...), Dense(...), Softmax(...))` or \n", " \n", " `model = Sequential()\n", " model.add(Embeddings(...))\n", " model.add(AveragePooling1D(...))\n", " model.add(Dense(...))\n", " model.add(Softmax(...))`\n", "\n", "- [`tf.keras.layers.Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) : Maps positive integers into vectors of fixed size. It will have shape (vocabulary length X dimension of output vectors). The dimension of output vectors (called `d_feature`in the model) is the number of elements in the word embedding. \n", " - `Embedding(input_dim, output_dim)`.\n", " - `input_dim` is the number of unique words in the given vocabulary.\n", " - `output_dim` is the number of elements in the word embedding (some choices for a word embedding size range from 150 to 300, for example).\n", " \n", "\n", "\n", "- [`tf.keras.layers.LSTM`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM) : The LSTM layer. The number of units should be specified and should match the number of elements in the word embedding. \n", " - `LSTM(units)` Builds an LSTM layer of n_units.\n", " \n", " \n", " \n", "- [`tf.keras.layers.GlobalAveragePooling1D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D) : Computes global average pooling, which essentially takes the mean across a desired axis. GlobalAveragePooling1D uses one tensor axis to form groups of values and replaces each group with the mean value of that group. \n", " - `GlobalAveragePooling1D()` takes the mean.\n", "\n", "\n", "\n", "- [`tf.keras.layers.Lambda`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.base.Fn): Layer with no weights that applies the function f, which should be specified using a lambda syntax. You will use this layer to apply normalization with the function\n", " - `tfmath.l2_normalize(x)`\n", "\n", "\n", "\n", "- [`tf.keras.layers.Input`](https://www.tensorflow.org/api_docs/python/tf/keras/Input): it is used to instantiate a Keras tensor. Remember to set correctly the dimension and type of the input, which are batches of questions. For this, keep in mind that each question is a single string. \n", " - `Input(input_shape,dtype=None,...)`\n", " - `input_shape`: Shape tuple (not including the batch axis)\n", " - `dtype`: (optional) data type of the input\n", "\n", "\n", "\n", "- [`tf.keras.layers.Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Concatenate): Layer that concatenates a list of inputs. This layer will concatenate the normalized outputs of each LSTM into a single output for the model. 
\n", " - `Concatenate()`" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "deletable": false, "tags": [ "graded" ] }, "outputs": [], "source": [ "# GRADED FUNCTION: Siamese\n", "def Siamese(text_vectorizer, vocab_size=36224, d_feature=128):\n", " \"\"\"Returns a Siamese model.\n", "\n", " Args:\n", " text_vectorizer (TextVectorization): TextVectorization instance, already adapted to your training data.\n", " vocab_size (int, optional): Length of the vocabulary. Defaults to 36224, which is the vocabulary size for your case.\n", " d_model (int, optional): Depth of the model. Defaults to 128.\n", " \n", " Returns:\n", " tf.model.Model: A Siamese model. \n", " \n", " \"\"\"\n", " ### START CODE HERE ###\n", "\n", " branch = tf.keras.models.Sequential(name='sequential') \n", " # Add the text_vectorizer layer. This is the text_vectorizer you instantiated and trained before \n", " branch.add(text_vectorizer)\n", " # Add the Embedding layer. Remember to call it 'embedding' using the parameter `name`\n", " branch.add(tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=d_feature, name='embedding'))\n", " # Add the LSTM layer, recall from W2 that you want to the LSTM layer to return sequences, ot just one value. \n", " # Remember to call it 'LSTM' using the parameter `name`\n", " branch.add(tf.keras.layers.LSTM(units=d_feature, return_sequences=True, name='LSTM'))\n", " # Add the GlobalAveragePooling1D layer. Remember to call it 'mean' using the parameter `name`\n", " branch.add(tf.keras.layers.GlobalAveragePooling1D(name='mean'))\n", " \n", " # Add the normalization layer using Lambda\n", " branch.add(tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1), name='out'))\n", " \n", " # Define both inputs. Remember to call then 'input_1' and 'input_2' using the `name` parameter. \n", " # Be mindful of the data type and size\n", " input1 = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name='input_1')\n", " input2 = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name='input_2')\n", " # Define the output of each branch of your Siamese network. Remember that both branches have the same coefficients, \n", " # but they each receive different inputs.\n", " branch1 = branch(input1)\n", " branch2 = branch(input2)\n", " # Define the Concatenate layer. You should concatenate columns, you can fix this using the `axis`parameter. \n", " # This layer is applied over the outputs of each branch of the Siamese network\n", " conc = tf.keras.layers.Concatenate(axis=-1, name='conc_1_2')([branch1, branch2])\n", " \n", " ### END CODE HERE ###\n", " \n", " return tf.keras.models.Model(inputs=[input1, input2], outputs=conc, name=\"SiameseModel\")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "es2gfwZypiul" }, "source": [ "Setup the Siamese network model" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 255 }, "colab_type": "code", "deletable": false, "editable": false, "id": "kvQ_jf52-JAn", "outputId": "d409460d-2ffb-4ae6-8745-ddcfa1d892ad", "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From c:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\keras\\src\\backend\\tensorflow\\core.py:204: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n", "\n" ] }, { "data": { "text/html": [ "
Model: \"SiameseModel\"\n",
       "
\n" ], "text/plain": [ "\u001b[1mModel: \"SiameseModel\"\u001b[0m\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┓\n",
       "┃ Layer (type)         Output Shape          Param #  Connected to      ┃\n",
       "┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━┩\n",
       "│ input_1             │ (None, 1)         │          0 │ -                 │\n",
       "│ (InputLayer)        │                   │            │                   │\n",
       "├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
       "│ input_2             │ (None, 1)         │          0 │ -                 │\n",
       "│ (InputLayer)        │                   │            │                   │\n",
       "├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
       "│ sequential          │ (None, 128)       │  4,768,256 │ input_1[0][0],    │\n",
       "│ (Sequential)        │                   │            │ input_2[0][0]     │\n",
       "├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
       "│ conc_1_2            │ (None, 256)       │          0 │ sequential[0][0], │\n",
       "│ (Concatenate)       │                   │            │ sequential[1][0]  │\n",
       "└─────────────────────┴───────────────────┴────────────┴───────────────────┘\n",
       "
\n" ], "text/plain": [ "┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┓\n", "┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mConnected to \u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━┩\n", "│ input_1 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │ - │\n", "│ (\u001b[38;5;33mInputLayer\u001b[0m) │ │ │ │\n", "├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n", "│ input_2 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │ - │\n", "│ (\u001b[38;5;33mInputLayer\u001b[0m) │ │ │ │\n", "├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n", "│ sequential │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m4,768,256\u001b[0m │ input_1[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m], │\n", "│ (\u001b[38;5;33mSequential\u001b[0m) │ │ │ input_2[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m] │\n", "├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n", "│ conc_1_2 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m256\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │ sequential[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m], │\n", "│ (\u001b[38;5;33mConcatenate\u001b[0m) │ │ │ sequential[\u001b[38;5;34m1\u001b[0m][\u001b[38;5;34m0\u001b[0m] │\n", "└─────────────────────┴───────────────────┴────────────┴───────────────────┘\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Total params: 4,768,256 (18.19 MB)\n",
       "
\n" ], "text/plain": [ "\u001b[1m Total params: \u001b[0m\u001b[38;5;34m4,768,256\u001b[0m (18.19 MB)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Trainable params: 4,768,256 (18.19 MB)\n",
       "
\n" ], "text/plain": [ "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m4,768,256\u001b[0m (18.19 MB)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Non-trainable params: 0 (0.00 B)\n",
       "
\n" ], "text/plain": [ "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
Model: \"sequential\"\n",
       "
\n" ], "text/plain": [ "\u001b[1mModel: \"sequential\"\u001b[0m\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
       "┃ Layer (type)                     Output Shape                  Param # ┃\n",
       "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
       "│ text_vectorization              │ (None, None)           │             0 │\n",
       "│ (TextVectorization)             │                        │               │\n",
       "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
       "│ embedding (Embedding)           │ (None, None, 128)      │     4,636,672 │\n",
       "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
       "│ LSTM (LSTM)                     │ (None, None, 128)      │       131,584 │\n",
       "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
       "│ mean (GlobalAveragePooling1D)   │ (None, 128)            │             0 │\n",
       "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
       "│ out (Lambda)                    │ (None, 128)            │             0 │\n",
       "└─────────────────────────────────┴────────────────────────┴───────────────┘\n",
       "
\n" ], "text/plain": [ "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n", "┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n", "│ text_vectorization │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", "│ (\u001b[38;5;33mTextVectorization\u001b[0m) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ embedding (\u001b[38;5;33mEmbedding\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m4,636,672\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ LSTM (\u001b[38;5;33mLSTM\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m131,584\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ mean (\u001b[38;5;33mGlobalAveragePooling1D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ out (\u001b[38;5;33mLambda\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n", "└─────────────────────────────────┴────────────────────────┴───────────────┘\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Total params: 4,768,256 (18.19 MB)\n",
       "
\n" ], "text/plain": [ "\u001b[1m Total params: \u001b[0m\u001b[38;5;34m4,768,256\u001b[0m (18.19 MB)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Trainable params: 4,768,256 (18.19 MB)\n",
       "
\n" ], "text/plain": [ "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m4,768,256\u001b[0m (18.19 MB)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Non-trainable params: 0 (0.00 B)\n",
       "
\n" ], "text/plain": [ "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# check your model\n", "model = Siamese(text_vectorization, vocab_size=text_vectorization.vocabulary_size())\n", "model.build(input_shape=None)\n", "model.summary()\n", "model.get_layer(name='sequential').summary()" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "LMK9zqhHpiuo" }, "source": [ "**Expected output:** \n", "\n", "\n", "\n", "```Model: \"SiameseModel\"\n", "__________________________________________________________________________________________________\n", " Layer (type) Output Shape Param # Connected to \n", "==================================================================================================\n", " input_1 (InputLayer) [(None, 1)] 0 [] \n", " \n", " input_2 (InputLayer) [(None, 1)] 0 [] \n", " \n", " sequential (Sequential) (None, 128) 4768256 ['input_1[0][0]', \n", " 'input_2[0][0]'] \n", " \n", " conc_1_2 (Concatenate) (None, 256) 0 ['sequential[0][0]', \n", " 'sequential[1][0]'] \n", " \n", "==================================================================================================\n", "Total params: 4768256 (18.19 MB)\n", "Trainable params: 4768256 (18.19 MB)\n", "Non-trainable params: 0 (0.00 Byte)\n", "__________________________________________________________________________________________________\n", "Model: \"sequential\"\n", "_________________________________________________________________\n", " Layer (type) Output Shape Param # \n", "=================================================================\n", " text_vectorization (TextVe (None, None) 0 \n", " ctorization) \n", " \n", " embedding (Embedding) (None, None, 128) 4636672 \n", " \n", " LSTM (LSTM) (None, None, 128) 131584 \n", " \n", " mean (GlobalAveragePooling (None, 128) 0 \n", " 1D) \n", " \n", " out (Lambda) (None, 128) 0 \n", " \n", "=================================================================\n", "Total params: 4768256 (18.19 MB)\n", "Trainable params: 4768256 (18.19 MB)\n", "Non-trainable params: 0 (0.00 Byte)\n", "_________________________________________________________________\n", "```\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also draw the model for a clearer view of your Siamese network" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "You must install pydot (`pip install pydot`) for `plot_model` to work.\n" ] } ], "source": [ "tf.keras.utils.plot_model(\n", " model,\n", " to_file=\"model.png\",\n", " show_shapes=True,\n", " show_dtype=True,\n", " show_layer_names=True,\n", " rankdir=\"TB\",\n", " expand_nested=True)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "KVo1Gvripiuo" }, "source": [ "\n", "\n", "### 2.2 Hard Negative Mining\n", "\n", "\n", "You will now implement the `TripletLoss` with hard negative mining.
\n", "As explained in the lecture, you will be using all the questions from each batch to compute this loss. Positive examples are questions $q1_i$, and $q2_i$, while all the other combinations $q1_i$, $q2_j$ ($i\\neq j$), are considered negative examples. The loss will be composed of two terms. One term utilizes the mean of all the non duplicates, the second utilizes the *closest negative*. Our loss expression is then:\n", " \n", "\\begin{align}\n", " \\mathcal{Loss_1(A,P,N)} &=\\max \\left( -cos(A,P) + mean_{neg} +\\alpha, 0\\right) \\\\\n", " \\mathcal{Loss_2(A,P,N)} &=\\max \\left( -cos(A,P) + closest_{neg} +\\alpha, 0\\right) \\\\\n", "\\mathcal{Loss(A,P,N)} &= mean(Loss_1 + Loss_2) \\\\\n", "\\end{align}\n", "\n", "\n", "Further, two sets of instructions are provided. The first set, found just below, provides a brief description of the task. If that set proves insufficient, a more detailed set can be displayed. \n", "\n", "\n", "### Exercise 02\n", "\n", "**Instructions (Brief):** Here is a list of things you should do:
\n", "\n", "- As this will be run inside Tensorflow, use all operation supplied by `tf.math` or `tf.linalg`, instead of `numpy` functions. You will also need to explicitly use `tf.shape` to get the batch size from the inputs. This is to make it compatible with the Tensor inputs it will receive when doing actual training and testing. \n", "- Use [`tf.linalg.matmul`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul) to calculate the similarity matrix $v_2v_1^T$ of dimension `batch_size` x `batch_size`. \n", "- Take the score of the duplicates on the diagonal with [`tf.linalg.diag_part`](https://www.tensorflow.org/api_docs/python/tf/linalg/diag_part). \n", "- Use the `TensorFlow` functions [`tf.eye`](https://www.tensorflow.org/api_docs/python/tf/eye) and [`tf.math.reduce_max`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_max) for the identity matrix and the maximum respectively. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "GWsX-Wz3piup" }, "source": [ "
\n", "\n", " More Detailed Instructions \n", "\n", "\n", "We'll describe the algorithm using a detailed example. Below, $V_1$, $V_2$ are the output of the normalization blocks in our model. Here you will use a `batch_size` of 4 and a `d_model of 3`. As explained in lecture, the input questions, Q1, Q2 are arranged so that corresponding inputs are duplicates while non-corresponding entries are not. The outputs will have the same pattern.\n", "\n", "\n", "\n", "This testcase arranges the outputs, $V_1$,$V_2$, to highlight different scenarios. Here, the first two outputs $V_1[0]$, $V_2[0]$ match exactly, so the model is generating the same vector for Q1[0] and Q2[0] inputs. The second pair of outputs, circled in orange, differ greatly on one of the values, so the transformation is not quite the same for these questions. Next, you have examples $V_1[3]$ and $V_2[3]$, which match almost exactly. Finally, $V_1[4]$ and $V_2[4]$, circled in purple, are set to be exactly opposite, being 180 degrees from each other. \n", "\n", "The first step is to compute the cosine similarity matrix or `score` in the code. As explained in the lectures, this is $$V_2 V_1^T.$$This is generated with `tf.linalg.matmul`. Since matrix multiplication is not commutative, the order in which you pass the arguments is important. If you want columns to represent different questions in Q1 and rows to represent different questions in Q2, as seen in the video, then you need to compute $V_2 V_1^T$. \n", "\n", "\n", "\n", "The clever arrangement of inputs creates the data needed for positive *and* negative examples without having to run all pair-wise combinations. Because Q1[n] is a duplicate of only Q2[n], other combinations are explicitly created negative examples or *Hard Negative* examples. The matrix multiplication efficiently produces the cosine similarity of all positive/negative combinations as shown above on the left side of the diagram. 'Positive' are the results of duplicate examples (cells shaded in green) and 'negative' are the results of explicitly created negative examples (cells shaded in blue). The results for our test case are as expected, $V_1[0]\\cdot V_2[0]$ and $V_1[3]\\cdot V_2[3]$ match producing '1', and '0.99' respectively, while the other 'positive' cases don't match quite right. Note also that the $V_2[2]$ example was set to match $V_1[3]$, producing a not so good match at `score[2,2]` and an undesired 'negative' case of a '1', shown in grey. \n", "\n", "With the similarity matrix (`score`) you can begin to implement the loss equations. First, you can extract $cos(A,P)$ by utilizing `tf.linalg.diag_part`. The goal is to grab all the green entries in the diagram above. This is `positive` in the code.\n", "\n", "Next, you will create the *closest_negative*. This is the nonduplicate entry in $V_2$ that is closest to (has largest cosine similarity) to an entry in $V_1$, but still has smaller cosine similarity than the positive example. For example, consider row 2 in the score matrix. This row has the cosine similarity between $V_2[2]$ and all four vectors in $V_1$. In this case, the largest value in the off-diagonal is`score[2,3]`$=V_2[3]\\cdot V1[2]$, which has a score of 1. However, since 1 is grater than the similarity for the positive example, this is *not* the *closest_negative*. For this particular row, the *closes_negative* will have to be `score[2,1]=0.36`. 
This is the maximum value of the 'negative' entries, which are smaller than the 'positive' example.\n", "\n", "To implement this, you need to pick the maximum entry on a row of `score`, ignoring the 'positive'/green entries and any 'negative'/blue entry greater than the 'positive' one. To avoid selecting these entries, you can make them large negative numbers. For this, you can create a mask to identify these two scenarios, multiply it by 2.0 and subtract it out of `scores`. To create the mask, you need to check if the cell is diagonal by computing `tf.eye(batch_size) == 1`, or if the non-diagonal cell is greater than the diagonal with `negative_zero_on_duplicate > tf.expand_dims(positive, 1)`. Remember that `positive` already has the diagonal values. Now you can use `tf.math.reduce_max`, row by row (`axis=1`), to select the maximum, which is `closest_negative`.\n", "\n", "Next, we'll create *mean_negative*. As the name suggests, this is the mean of all the 'negative'/blue values in `score` on a row by row basis. You can use `tf.linalg.diag` to create a diagonal matrix, where the diagonal matches `positive`, and just subtract it from `score` to get just the 'negative' values. This is `negative_zero_on_duplicate` in the code. Compute the mean by using `tf.math.reduce_sum` on `negative_zero_on_duplicate` for `axis=1` and divide it by `(batch_size - 1)`. This is `mean_negative`.\n", "\n", "Now you can compute the loss using the two equations above and `tf.maximum`. This will form `triplet_loss1` and `triplet_loss2`. \n", "\n", "`triplet_loss` is the `tf.math.reduce_sum` of the sum of the two individual losses.\n" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "deletable": false, "tags": [ "graded" ] }, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "def TripletLossFn(v1, v2, margin=0.25):\n", " \"\"\"Custom Loss function.\n", "\n", " Args:\n", " v1 (numpy.ndarray or Tensor): Array with dimension (batch_size, model_dimension) associated with Q1.\n", " v2 (numpy.ndarray or Tensor): Array with dimension (batch_size, model_dimension) associated with Q2.\n", " margin (float, optional): Desired margin. Defaults to 0.25.\n", "\n", " Returns:\n", " triplet_loss (numpy.ndarray or Tensor)\n", " \"\"\"\n", "\n", " ### START CODE HERE ###\n", "\n", " # use `tf.linalg.matmul` to compute the similarity matrix V2 V1^T. \n", " # Don't forget to transpose the second argument using `transpose_b=True`\n", " scores = tf.linalg.matmul(v2, v1, transpose_b=True)\n", " # calculate new batch size and cast it as the same datatype as scores.\n", " batch_size = tf.cast(tf.shape(v1)[0], scores.dtype) \n", " \n", " # use `tf.linalg.diag_part` to grab the cosine similarity of all positive examples\n", " positive = tf.linalg.diag_part(scores)\n", " \n", " # subtract the diagonal from scores. 
You can do this by creating a diagonal matrix with the values \n", " # of all positive examples using `tf.linalg.diag`\n", " negative_zero_on_duplicate = scores - tf.linalg.diag(positive)\n", " \n", " # use `tf.math.reduce_sum` on `negative_zero_on_duplicate` for `axis=1` and divide it by `(batch_size - 1)`\n", " mean_negative = tf.math.reduce_sum(negative_zero_on_duplicate, axis=1) / (batch_size - 1)\n", " \n", " # create a composition of two masks: \n", " # the first mask to extract the diagonal elements (make sure you use the variable batch_size here), \n", " # the second mask to extract elements in the negative_zero_on_duplicate matrix that are larger than the elements in the diagonal \n", " mask_exclude_positives = tf.cast((tf.eye(batch_size) == 1) | (negative_zero_on_duplicate > tf.reshape(positive, (batch_size, 1))),\n", " scores.dtype)\n", " \n", " # multiply `mask_exclude_positives` by 2.0 and subtract it out of `negative_zero_on_duplicate`\n", " negative_without_positive = negative_zero_on_duplicate - (mask_exclude_positives * 2.0)\n", " \n", " # take the row by row `max` of `negative_without_positive`. \n", " # Hint: `tf.math.reduce_max(negative_without_positive, axis=1)`\n", " closest_negative = tf.math.reduce_max(negative_without_positive, axis=1)\n", " \n", " # compute `tf.maximum` among 0.0 and `A`\n", " # A = subtract `positive` from `margin` and add `closest_negative` \n", " triplet_loss1 = tf.maximum(0.0, margin - positive + closest_negative)\n", " \n", " # compute `tf.maximum` among 0.0 and `B`\n", " # B = subtract `positive` from `margin` and add `mean_negative` \n", " triplet_loss2 = tf.maximum(0.0, margin - positive + mean_negative)\n", " \n", " # add the two losses together and take the `tf.math.reduce_sum` of it\n", " triplet_loss = tf.math.reduce_sum(triplet_loss1 + triplet_loss2)\n", "\n", " ### END CODE HERE ###\n", "\n", " return triplet_loss\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now you can check the triplet loss between two sets. The following example emulates the triplet loss between two groups of questions with `batch_size=2`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "tags": [] }, "outputs": [], "source": [ "v1 = np.array([[0.26726124, 0.53452248, 0.80178373],[0.5178918 , 0.57543534, 0.63297887]])\n", "v2 = np.array([[ 0.26726124, 0.53452248, 0.80178373],[-0.5178918 , -0.57543534, -0.63297887]])\n", "print(\"Triplet Loss:\", TripletLossFn(v1,v2).numpy())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output:**\n", "```\n", "Triplet Loss: ~ 0.70\n", "``` " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "r974ozuHYAom" }, "source": [ "To recognize it as a loss function, Keras needs it to have two inputs: the true labels and the model outputs. You will not be using the true labels, but you still need to pass some dummy variable with size `(batch_size,)` for TensorFlow to accept it as a valid loss.\n", "\n", "Additionally, the `out` parameter must coincide with the output of your Siamese network, which is the concatenation of the processing of each of the inputs, so you need to extract $v_1$ and $v_2$ from there."
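, "\n", "\n", "As a quick sanity check (an illustrative sketch with made-up values, not part of the grading), once `TripletLoss` is defined in the next cell you could feed it a fake concatenated output:\n", "\n", "```python\n", "batch_size, d = 2, 3\n", "v1 = tf.math.l2_normalize(tf.random.normal((batch_size, d)), axis=1)\n", "v2 = tf.math.l2_normalize(tf.random.normal((batch_size, d)), axis=1)\n", "out = tf.concat([v1, v2], axis=1)      # mimic the Siamese model's concatenated output\n", "dummy_labels = tf.ones((batch_size,))  # ignored by the loss, but required by Keras\n", "print(TripletLoss(dummy_labels, out))\n", "```"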
] }, { "cell_type": "code", "execution_count": 26, "metadata": { "deletable": false, "editable": false, "tags": [ "graded" ] }, "outputs": [], "source": [ "def TripletLoss(labels, out, margin=0.25):\n", " _, out_size = out.shape # get embedding size\n", " v1 = out[:,:int(out_size/2)] # Extract v1 from out\n", " v2 = out[:,int(out_size/2):] # Extract v2 from out\n", " return TripletLossFn(v1, v2, margin=margin)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "lsvjaCQ6wt02" }, "source": [ "\n", "\n", "# Part 3: Training\n", "\n", "Now it's time to finally train your model. As usual, you have to define the cost function and the optimizer. You also have to build the actual model you will be training. \n", "\n", "To pass the input questions for training and validation you will use the iterator produced by [`tensorflow.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset). Run the next cell to create your train and validation datasets. " ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "deletable": false, "editable": false, "tags": [] }, "outputs": [], "source": [ "train_dataset = tf.data.Dataset.from_tensor_slices(((train_Q1, train_Q2),tf.constant([1]*len(train_Q1))))\n", "val_dataset = tf.data.Dataset.from_tensor_slices(((val_Q1, val_Q2),tf.constant([1]*len(val_Q1))))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "IgFMfH5awt07" }, "source": [ "\n", "\n", "### 3.1 Training the model\n", "\n", "You will now write a function that takes in your model to train it. To train your model you have to decide how many times you want to iterate over the entire data set; each iteration is defined as an `epoch`. For each epoch, you have to go over all the data, using your `Dataset` iterator.\n", "\n", "\n", "### Exercise 03\n", "\n", "**Instructions:** Implement the `train_model` below to train the neural network above. Here is a list of things you should do: \n", "\n", "- Compile the model. Here you will need to pass in:\n", " - `loss=TripletLoss`\n", " - `optimizer=Adam()` with learning rate `lr`\n", "- Call the `fit` method. You should pass:\n", " - `train_dataset`\n", " - `epochs`\n", " - `validation_data` \n", "\n", "\n", "\n", "You will be using your triplet loss function with Adam optimizer. Also, note that you are not explicitly defining the batch size, because it will be already determined by the `Dataset`.\n", "\n", "This function will return the trained model" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 391 }, "colab_type": "code", "deletable": false, "id": "-3KXjmBo_6Xa", "outputId": "9d57f731-1534-4218-e744-783359d5cd19", "scrolled": true, "tags": [ "graded" ] }, "outputs": [], "source": [ "# GRADED FUNCTION: train_model\n", "def train_model(Siamese, TripletLoss, text_vectorizer, train_dataset, val_dataset, d_feature=128, lr=0.01, train_steps=5):\n", " \"\"\"Training the Siamese Model\n", "\n", " Args:\n", " Siamese (function): Function that returns the Siamese model.\n", " TripletLoss (function): Function that defines the TripletLoss loss function.\n", " text_vectorizer: trained instance of `TextVecotrization` \n", " train_dataset (tf.data.Dataset): Training dataset\n", " val_dataset (tf.data.Dataset): Validation dataset\n", " d_feature (int, optional) = size of the encoding. Defaults to 128.\n", " lr (float, optional): learning rate for optimizer. 
Defaults to 0.01.\n", " train_steps (int): number of epochs\n", " \n", " Returns:\n", " tf.keras.Model\n", " \"\"\"\n", " ### START CODE HERE ###\n", "\n", " # Instantiate your Siamese model\n", " model = Siamese(text_vectorizer,\n", " vocab_size = text_vectorizer.vocabulary_size(), # set vocab_size according to the size of your vocabulary\n", " d_feature = d_feature)\n", " # Compile the model\n", " model.compile(loss=TripletLoss,\n", " optimizer = tf.optimizers.Adam(learning_rate=lr)\n", " )\n", " # Train the model \n", " model.fit(train_dataset,\n", " epochs = train_steps,\n", " validation_data = val_dataset,\n", " )\n", " \n", " ### END CODE HERE ###\n", "\n", " return model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now call the `train_model` function. You will be using a batch size of 256. \n", "\n", "To create the data generators you will be using the `batch` method of the `Dataset` object. You will also call the `shuffle` method to shuffle the dataset on each iteration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "scrolled": false, "tags": [] }, "outputs": [], "source": [ "train_steps = 2\n", "batch_size = 256\n", "train_generator = train_dataset.shuffle(len(train_Q1),\n", " seed=7, \n", " reshuffle_each_iteration=True).batch(batch_size=batch_size)\n", "val_generator = val_dataset.shuffle(len(val_Q1), \n", " seed=7,\n", " reshuffle_each_iteration=True).batch(batch_size=batch_size)\n", "model = train_model(Siamese, TripletLoss, text_vectorization, \n", " train_generator, \n", " val_generator, \n", " train_steps=train_steps,)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model was only trained for 2 steps because training the whole Siamese network takes too long, and 
produces slightly different results for each run. For the rest of the assignment you will be using a pretrained model, but this small example should help you understand how the training can be done." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "abKPe7d4wt1C" }, "source": [ "\n", "\n", "# Part 4: Evaluation \n", "\n", "\n", "\n", "### 4.1 Evaluating your Siamese network\n", "\n", "In this section you will learn how to evaluate a Siamese network. You will start by loading a pretrained model, and then you will use it to predict. For the prediction you will need to take the output of your model and compute the cosine similarity between each pair of questions." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "deletable": false, "editable": false, "scrolled": false, "tags": [] }, "outputs": [ { "ename": "RecursionError", "evalue": "maximum recursion depth exceeded in comparison", "output_type": "error", "traceback": [ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[1;31mRecursionError\u001b[0m Traceback (most recent call last)", "Cell \u001b[1;32mIn[3], line 2\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mtensorflow\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m \u001b[38;5;21;01mtf\u001b[39;00m\n\u001b[1;32m----> 2\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[43mtf\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mkeras\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmodels\u001b[49m\u001b[38;5;241m.\u001b[39mload_model(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmodel/trained_model.keras\u001b[39m\u001b[38;5;124m'\u001b[39m, safe_mode\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m, \u001b[38;5;28mcompile\u001b[39m\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m)\n\u001b[0;32m 4\u001b[0m \u001b[38;5;66;03m# Show the model architecture\u001b[39;00m\n\u001b[0;32m 5\u001b[0m model\u001b[38;5;241m.\u001b[39msummary()\n", "File \u001b[1;32mc:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\tensorflow\\python\\util\\lazy_loader.py:182\u001b[0m, in \u001b[0;36mKerasLazyLoader.__getattr__\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m 180\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_initialized:\n\u001b[0;32m 181\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_initialize()\n\u001b[1;32m--> 182\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_tfll_keras_version\u001b[49m \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mkeras_3\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n\u001b[0;32m 183\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[0;32m 184\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mv1\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 185\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_submodule\n\u001b[0;32m 186\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m item\u001b[38;5;241m.\u001b[39mstartswith(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcompat.v1.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 187\u001b[0m ):\n\u001b[0;32m 188\u001b[0m 
\u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[0;32m 189\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` is not available with Keras 3. Keras 3 has \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 190\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mno support for TF 1 APIs. You can install the `tf_keras` package \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 193\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` to `tf_keras`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 194\u001b[0m )\n", "File \u001b[1;32mc:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\tensorflow\\python\\util\\lazy_loader.py:182\u001b[0m, in \u001b[0;36mKerasLazyLoader.__getattr__\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m 180\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_initialized:\n\u001b[0;32m 181\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_initialize()\n\u001b[1;32m--> 182\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_tfll_keras_version\u001b[49m \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mkeras_3\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n\u001b[0;32m 183\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[0;32m 184\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mv1\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 185\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_submodule\n\u001b[0;32m 186\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m item\u001b[38;5;241m.\u001b[39mstartswith(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcompat.v1.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 187\u001b[0m ):\n\u001b[0;32m 188\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[0;32m 189\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` is not available with Keras 3. Keras 3 has \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 190\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mno support for TF 1 APIs. You can install the `tf_keras` package \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 193\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` to `tf_keras`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 194\u001b[0m )\n", " \u001b[1;31m[... 
skipping similar frames: KerasLazyLoader.__getattr__ at line 182 (1488 times)]\u001b[0m\n", "File \u001b[1;32mc:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\tensorflow\\python\\util\\lazy_loader.py:182\u001b[0m, in \u001b[0;36mKerasLazyLoader.__getattr__\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m 180\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_initialized:\n\u001b[0;32m 181\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_initialize()\n\u001b[1;32m--> 182\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_tfll_keras_version\u001b[49m \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mkeras_3\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n\u001b[0;32m 183\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[0;32m 184\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mv1\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 185\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_submodule\n\u001b[0;32m 186\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m item\u001b[38;5;241m.\u001b[39mstartswith(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcompat.v1.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 187\u001b[0m ):\n\u001b[0;32m 188\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[0;32m 189\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` is not available with Keras 3. Keras 3 has \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 190\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mno support for TF 1 APIs. 
You can install the `tf_keras` package \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 193\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` to `tf_keras`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 194\u001b[0m )\n", "File \u001b[1;32mc:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\tensorflow\\python\\util\\lazy_loader.py:178\u001b[0m, in \u001b[0;36mKerasLazyLoader.__getattr__\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m 177\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__getattr__\u001b[39m(\u001b[38;5;28mself\u001b[39m, item):\n\u001b[1;32m--> 178\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[43mitem\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m_tfll_mode\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m_tfll_initialized\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m_tfll_name\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m:\n\u001b[0;32m 179\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28msuper\u001b[39m(types\u001b[38;5;241m.\u001b[39mModuleType, \u001b[38;5;28mself\u001b[39m)\u001b[38;5;241m.\u001b[39m\u001b[38;5;21m__getattribute__\u001b[39m(item)\n\u001b[0;32m 180\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_initialized:\n", "\u001b[1;31mRecursionError\u001b[0m: maximum recursion depth exceeded in comparison" ] } ], "source": [ "import tensorflow as tf\n", "model = tf.keras.models.load_model('model/trained_model.keras', safe_mode=False, compile=False)\n", "\n", "# Show the model architecture\n", "model.summary()" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "QDi4MBiKpivF" }, "source": [ "\n", "### 4.2 Classify\n", "To determine the accuracy of the model, you will use the test set that was configured earlier. While in training you used only positive examples, the test data, `Q1_test`, `Q2_test` and `y_test`, is set up as pairs of questions, some of which are duplicates and some are not. \n", "This routine will run all the test question pairs through the model, compute the cosine similarity of each pair, threshold it and compare the result to `y_test` - the correct response from the data set. The results are accumulated to produce an accuracy; the confusion matrix is also computed to have a better understanding of the errors.\n", "\n", "\n", "\n", "### Exercise 04\n", "\n", "**Instructions** \n", " - Use a `tensorflow.data.Dataset` to go through the data in chunks with size batch_size. This time you don't need the labels, so you can just replace them by `None`,\n", " - use `predict` on the chunks of data.\n", " - compute `v1`, `v2` using the model output,\n", " - for each element of the batch\n", " - compute the cosine similarity of each pair of entries, `v1[j]`,`v2[j]`\n", " - determine if `d > threshold`\n", " - increment accuracy if that result matches the expected results (`y_test[j]`)\n", " \n", " Instead of running a for loop, you will vectorize all these operations to make things more efficient,\n", " - compute the final accuracy and confusion matrix and return. 
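  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is a minimal sketch on made-up, already L2-normalized vectors (all numbers below are illustrative only), showing how the row-wise dot product, the threshold, the accuracy and the confusion matrix fit together:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "# Made-up branch outputs for a batch of 4 question pairs; l2_normalize makes\n",
    "# each row unit length, so the row-wise dot product equals the cosine similarity\n",
    "v1 = tf.math.l2_normalize(tf.constant([[1.0, 2.0], [0.5, -1.0], [2.0, 0.0], [1.0, 1.0]]), axis=1)\n",
    "v2 = tf.math.l2_normalize(tf.constant([[1.1, 1.9], [1.0, 2.0], [-2.0, 0.1], [1.0, 0.9]]), axis=1)\n",
    "y_true = tf.constant([1.0, 0.0, 0.0, 1.0], dtype=tf.float64)\n",
    "\n",
    "# Cosine similarity of every pair in the batch at once (no for loop)\n",
    "d = tf.reduce_sum(v1 * v2, axis=1)\n",
    "\n",
    "# Threshold the similarities to get hard 0/1 predictions\n",
    "y_pred = tf.cast(d > 0.7, tf.float64)\n",
    "\n",
    "# Accuracy is the fraction of predictions that match the labels\n",
    "accuracy = tf.reduce_mean(tf.cast(tf.equal(y_pred, y_true), tf.float64))\n",
    "\n",
    "# 2x2 confusion matrix: rows are true labels, columns are predictions\n",
    "cm = tf.math.confusion_matrix(y_true, y_pred, num_classes=2)\n",
    "\n",
    "print('d =', d.numpy())\n",
    "print('accuracy =', accuracy.numpy())\n",
    "print(cm.numpy())"
   ]
  },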
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "deletable": false,
    "id": "K-h6ZH507fUm",
    "tags": [
     "graded"
    ]
   },
   "outputs": [],
   "source": [
    "# GRADED FUNCTION: classify\n",
    "def classify(test_Q1, test_Q2, y_test, threshold, model, batch_size=64, verbose=True):\n",
    "    \"\"\"Function to test the accuracy of the model.\n",
    "\n",
    "    Args:\n",
    "        test_Q1 (numpy.ndarray): Array of Q1 questions. Each element of the array is a string.\n",
    "        test_Q2 (numpy.ndarray): Array of Q2 questions. Each element of the array is a string.\n",
    "        y_test (numpy.ndarray): Array of actual target labels.\n",
    "        threshold (float): Desired threshold.\n",
    "        model (tensorflow.keras.Model): The Siamese model.\n",
    "        batch_size (int, optional): Size of the batches. Defaults to 64.\n",
    "        verbose (bool, optional): If the results should be printed out. Defaults to True.\n",
    "\n",
    "    Returns:\n",
    "        float: Accuracy of the model\n",
    "        numpy.ndarray: confusion matrix\n",
    "    \"\"\"\n",
    "    y_pred = []\n",
    "    test_gen = tf.data.Dataset.from_tensor_slices(((test_Q1, test_Q2), None)).batch(batch_size=batch_size)\n",
    "\n",
    "    ### START CODE HERE ###\n",
    "\n",
    "    for (batch_x1, batch_x2), _ in test_gen:\n",
    "        # Run each batch through the shared branch of the Siamese network\n",
    "        v1 = model.get_layer('sequential')(batch_x1)\n",
    "        v2 = model.get_layer('sequential')(batch_x2)\n",
    "\n",
    "        # The branch outputs are normalized, so the row-wise dot product is the cosine similarity\n",
    "        d = tf.reduce_sum(v1 * v2, axis=1)\n",
    "\n",
    "        # Make predictions based on the threshold\n",
    "        batch_y_pred = tf.cast(d > threshold, tf.float64)\n",
    "        y_pred.extend(batch_y_pred.numpy())\n",
    "\n",
    "    # Calculate the accuracy: the fraction of predictions that match y_test\n",
    "    y_pred = tf.convert_to_tensor(y_pred, dtype=tf.float64)\n",
    "    accuracy = tf.reduce_mean(tf.cast(tf.equal(y_pred, y_test), tf.float64))\n",
    "\n",
    "    # Compute the confusion matrix using tf.math.confusion_matrix\n",
    "    cm = tf.math.confusion_matrix(y_test, y_pred, num_classes=2)\n",
    "\n",
    "    ### END CODE HERE ###\n",
    "\n",
    "    return accuracy, cm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 34
    },
    "colab_type": "code",
    "deletable": false,
    "editable": false,
    "id": "yeQjHxkfpivH",
    "outputId": "103b8449-896f-403d-f011-583df70afdae",
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy 0.7259765625\n",
      "Confusion matrix:\n",
      "[[4876 1506]\n",
      " [1300 2558]]\n"
     ]
    }
   ],
   "source": [
    "# this takes around 1 minute\n",
    "accuracy, cm = classify(Q1_test, Q2_test, y_test, 0.7, model, batch_size=512)\n",
    "print(\"Accuracy\", accuracy.numpy())\n",
    "print(f\"Confusion matrix:\\n{cm.numpy()}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "CsokYZwhpivJ"
   },
   "source": [
    "### **Expected Result**\n",
    "Accuracy ~0.725\n",
    "\n",
    "Confusion matrix:\n",
    "```\n",
    "[[4876 1506]\n",
    " [1300 2558]]\n",
    "```"
   ]
  },
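  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, the accuracy can be read directly off the confusion matrix: the diagonal holds the correct predictions. A minimal sketch using the matrix reported above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Confusion matrix from the run above: rows are true labels, columns are predictions\n",
    "cm_check = np.array([[4876, 1506],\n",
    "                     [1300, 2558]])\n",
    "\n",
    "true_negatives, false_positives = cm_check[0]\n",
    "false_negatives, true_positives = cm_check[1]\n",
    "\n",
    "# Accuracy = correct predictions / all predictions\n",
    "print((true_negatives + true_positives) / cm_check.sum())  # 0.7259765625, matching the value above"
   ]
  },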
\n", "# # Don't forget to use the appropriate axis argument.\n", "# d = None\n", "# # Check if d>threshold to make predictions\n", "# y_pred = tf.cast(d>threshold, tf.float64)\n", "# # take the average of correct predictions to get the accuracy\n", "# accuracy = None\n", "# # compute the confusion matrix using `tf.math.confusion_matrix`\n", "# cm = tf.math.confusion_matrix\n", " \n", " ### END CODE HERE ###\n", " \n", " return accuracy, cm" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "colab_type": "code", "deletable": false, "editable": false, "id": "yeQjHxkfpivH", "outputId": "103b8449-896f-403d-f011-583df70afdae", "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Accuracy 0.7259765625\n", "Confusion matrix:\n", "[[4876 1506]\n", " [1300 2558]]\n" ] } ], "source": [ "# this takes around 1 minute\n", "accuracy, cm = classify(Q1_test,Q2_test, y_test, 0.7, model, batch_size = 512) \n", "print(\"Accuracy\", accuracy.numpy())\n", "print(f\"Confusion matrix:\\n{cm.numpy()}\")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "CsokYZwhpivJ" }, "source": [ "### **Expected Result** \n", "Accuracy ~0.725\n", "\n", "Confusion matrix:\n", "```\n", "[[4876 1506]\n", " [1300 2558]]\n", " ```" ] }, { "cell_type": "code", "execution_count": 64, "metadata": { "deletable": false, "editable": false, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[92mAll tests passed!\n" ] } ], "source": [ "# Test your function!\n", "w3_unittest.test_classify(classify, model)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "4-STC44Ywt1I" }, "source": [ "\n", "\n", "# Part 5: Testing with your own questions\n", "\n", "In this final section you will test the model with your own questions. You will write a function `predict` which takes two questions as input and returns `True` or `False` depending on whether the question pair is a duplicate or not. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "21h3Y0FNpivK" }, "source": [ "Write a function `predict` that takes in two questions, the threshold and the model, and returns whether the questions are duplicates (`True`) or not duplicates (`False`) given a similarity threshold. \n", "\n", "\n", "### Exercise 05\n", "\n", "\n", "**Instructions:** \n", "- Create a tensorflow.data.Dataset from your two questions. Again, labels are not important, so you simply write `None`\n", "- use the trained model output to create `v1`, `v2`\n", "- compute the cosine similarity (dot product) of `v1`, `v2`\n", "- compute `res` by comparing d to the threshold\n" ] }, { "cell_type": "code", "execution_count": 77, "metadata": { "colab": {}, "colab_type": "code", "deletable": false, "id": "kg0wQ8qhpivL", "tags": [ "graded" ] }, "outputs": [], "source": [ "# GRADED FUNCTION: predict\n", "def predict(question1, question2, threshold, model, verbose=False):\n", " \"\"\"Function for predicting if two questions are duplicates.\n", "\n", " Args:\n", " question1 (str): First question.\n", " question2 (str): Second question.\n", " threshold (float): Desired threshold.\n", " model (tensorflow.keras.Model): The Siamese model.\n", " verbose (bool, optional): If the results should be printed out. 
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "deletable": false,
    "id": "kg0wQ8qhpivL",
    "tags": [
     "graded"
    ]
   },
   "outputs": [],
   "source": [
    "# GRADED FUNCTION: predict\n",
    "def predict(question1, question2, threshold, model, verbose=False):\n",
    "    \"\"\"Function for predicting if two questions are duplicates.\n",
    "\n",
    "    Args:\n",
    "        question1 (str): First question.\n",
    "        question2 (str): Second question.\n",
    "        threshold (float): Desired threshold.\n",
    "        model (tensorflow.keras.Model): The Siamese model.\n",
    "        verbose (bool, optional): If the results should be printed out. Defaults to False.\n",
    "\n",
    "    Returns:\n",
    "        bool: True if the questions are duplicates, False otherwise.\n",
    "    \"\"\"\n",
    "    generator = tf.data.Dataset.from_tensor_slices((([question1], [question2]), None)).batch(batch_size=1)\n",
    "\n",
    "    ### START CODE HERE ###\n",
    "\n",
    "    # Call the predict method of your model and save the output into v1v2\n",
    "    v1v2 = model.predict(generator)\n",
    "    # The two branch outputs are concatenated along the last axis, so split them in half\n",
    "    out_size = v1v2.shape[1]\n",
    "    v1 = v1v2[:, :out_size // 2]\n",
    "    v2 = v1v2[:, out_size // 2:]\n",
    "    # Take the dot product to compute the cosine similarity of v1 and v2.\n",
    "    # Since v1 and v2 are single vectors, use tf.math.reduce_sum instead of tf.linalg.matmul\n",
    "    d = tf.reduce_sum(v1 * v2)\n",
    "    # Is d greater than the threshold?\n",
    "    res = d > threshold\n",
    "\n",
    "    ### END CODE HERE ###\n",
    "\n",
    "    if verbose:\n",
    "        print(\"Q1 = \", question1, \"\\nQ2 = \", question2)\n",
    "        print(\"d = \", d.numpy())\n",
    "        print(\"res = \", res.numpy())\n",
    "\n",
    "    return res.numpy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 102
    },
    "colab_type": "code",
    "deletable": false,
    "editable": false,
    "id": "Raojyhw3z7HE",
    "outputId": "b0907aaf-63c0-448d-99b0-012359381a97",
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1/1 [==============================] - 0s 16ms/step\n",
      "Q1 =  When will I see you? \n",
      "Q2 =  When can I see you again?\n",
      "d =  0.8422112\n",
      "res =  True\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 78,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Feel free to try with your own questions\n",
    "question1 = \"When will I see you?\"\n",
    "question2 = \"When can I see you again?\"\n",
    "# True means the pair is a duplicate, False otherwise\n",
    "predict(question1, question2, 0.7, model, verbose=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7OEKCa_hpivP"
   },
   "source": [
    "##### Expected Output\n",
    "If the input is:\n",
    "```\n",
    "question1 = \"When will I see you?\"\n",
    "question2 = \"When can I see you again?\"\n",
    "```\n",
    "\n",
    "the output is (d may vary a bit):\n",
    "```\n",
    "1/1 [==============================] - 0s 13ms/step\n",
    "Q1 =  When will I see you? \n",
    "Q2 =  When can I see you again?\n",
    "d =  0.8422112\n",
    "res =  True\n",
    "```"
   ]
  },
\n", "Q2 = Do they like hiking in the desert?\n", "d = 0.12625802\n", "res = False\n" ] }, { "data": { "text/plain": [ "False" ] }, "execution_count": 79, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Feel free to try with your own questions\n", "question1 = \"Do they enjoy eating the dessert?\"\n", "question2 = \"Do they like hiking in the desert?\"\n", "# 1 means it is duplicated, 0 otherwise\n", "predict(question1 , question2, 0.7, model, verbose=True)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "lWrt-yCMpivS" }, "source": [ "##### Expected output\n", "\n", "If input is:\n", "```\n", "question1 = \"Do they enjoy eating the dessert?\"\n", "question2 = \"Do they like hiking in the desert?\"\n", "```\n", "\n", "Output (d may vary a bit):\n", "\n", "```\n", "1/1 [==============================] - 0s 12ms/step\n", "Q1 = Do they enjoy eating the dessert? \n", "Q2 = Do they like hiking in the desert?\n", "d = 0.12625802\n", "res = False\n", "\n", "False\n", "```" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "NAfV3l5Zwt1L" }, "source": [ "You can see that the Siamese network is capable of catching complicated structures. Concretely it can identify question duplicates although the questions do not have many words in common. \n", " " ] }, { "cell_type": "code", "execution_count": 80, "metadata": { "deletable": false, "editable": false, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1/1 [==============================] - 1s 556ms/step\n", "(1, 128)\n", "1/1 [==============================] - 0s 16ms/step\n", "(1, 128)\n", "1/1 [==============================] - 0s 23ms/step\n", "(1, 128)\n", "1/1 [==============================] - 0s 16ms/step\n", "(1, 128)\n", "1/1 [==============================] - 0s 16ms/step\n", "(1, 128)\n", "\u001b[92mAll tests passed!\n" ] } ], "source": [ "# Test your function!\n", "w3_unittest.test_predict(predict, model)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "FsE8tdTLwt1M" }, "source": [ "\n", "\n", "### On Siamese networks\n", "\n", "Siamese networks are important and useful. Many times there are several questions that are already asked in quora, or other platforms and you can use Siamese networks to avoid question duplicates. \n", "\n", "Congratulations, you have now built a powerful system that can recognize question duplicates. In the next course we will use transformers for machine translation, summarization, question answering, and chatbots. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# " ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "machine_shape": "hm", "name": "C3_W4_Assignment_Solution.ipynb", "provenance": [], "toc_visible": true }, "coursera": { "schema_names": [ "NLPC3-4A" ] }, "grader_version": "1", "kernelspec": { "display_name": "seasme", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.7" } }, "nbformat": 4, "nbformat_minor": 4 }