{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"provenance":[{"file_id":"1yobwhLkaPORJrTb9u5EhXHpeWzx6dyyv","timestamp":1742998181191}],"gpuType":"T4"},"kernelspec":{"name":"python3","display_name":"Python 3"},"language_info":{"name":"python"},"accelerator":"GPU"},"cells":[{"cell_type":"code","source":["!pip install datasets\n","\n","from IPython.display import clear_output\n","from datasets import load_dataset\n","import tensorflow as tf\n","import time\n","clear_output()  # Clears the output"],"metadata":{"id":"sYqG7iVwXEX0"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["# @title Run me to mount drive\n","from google.colab import drive\n","drive.mount('/content/drive/', force_remount=True)\n","\n","path = '/content/drive/MyDrive/A&T_workshop/data'\n","\n","import sys\n","\n","sys.path.append(path)\n","import os\n","os.chdir(path)\n","!pwd"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"Gek5RlZ5ENoQ","executionInfo":{"status":"ok","timestamp":1743421832438,"user_tz":-60,"elapsed":2323,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"e7415907-ba69-4146-acfd-c669da003083"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["Mounted at /content/drive/\n","/content/drive/MyDrive/A&T_workshop/data\n"]}]},{"cell_type":"markdown","metadata":{"id":"Xboa1L2w8YWF"},"source":["### **Lab: Are you ready to build your own Small Language Model?**\n","\n","1.12 Coding Activity - Load the dataset\n","\n","**Dataset**\n","\n","For this activity, we will use the [TinyStories dataset](https://arxiv.org/pdf/2305.07759), a synthetic dataset of short stories in English that only contain words that a typical 3 to 4-year-old usually understands, generated by Large Language Models (GPT-3.5 and GPT-4). 
TinyStories can be used to train and evaluate language models that are much smaller (below 10 million total parameters), or that have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar."]},{"cell_type":"markdown","source":["**Step 1: Load the Dataset**\n","\n","\n","To load the dataset, we'll leverage the [huggingface datasets](https://huggingface.co/docs/datasets/en/index) package."],"metadata":{"id":"aS8lTu4FDclK"}},{"cell_type":"markdown","source":["The full TinyStories dataset contains over 2 million short stories. Depending on your hardware and available compute resources, training on the entire dataset might not be feasible, so you can choose a subset of stories to train with based on your resources. In this lab, we instead load a smaller local collection of stories (`africa_galore.json`)."],"metadata":{"id":"pGC5h7keT1GR"}},{"cell_type":"code","source":["import pandas as pd\n","\n","stories_data = pd.read_json('africa_galore.json')\n","\n","# qa_dataset = load_dataset(\"Svngoku/Global-African-History-QA\", split=\"train\")\n","# qa_df = pd.DataFrame(qa_dataset['answer'], columns=[\"description\"])\n","\n","# stories = pd.concat([stories_data['description'], qa_df]).reset_index()\n","# stories = stories['description']\n","train_dataset = stories_data['description']\n","print(train_dataset.shape)"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"x8rRtd8p2DQ6","executionInfo":{"status":"ok","timestamp":1743421832495,"user_tz":-60,"elapsed":56,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"8a845dcf-49a9-45a1-d5ea-bade82cc38a0"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["(239,)\n"]}]},{"cell_type":"code","execution_count":null,"metadata":{"id":"pLOBinv_gpTH"},"outputs":[],"source":["# tiny_stories_dataset = load_dataset(\"roneneldan/TinyStories\", split=\"train\")\n","# clear_output() # clears the output\n","\n","# # print(f\"Number of stories in 
the TinyStories train dataset: {tiny_stories_dataset.num_rows}\")\n","\n","# # The dataset contains over 2 million short children's stories\n","# stories = tiny_stories_dataset['text']\n","\n","# number_of_stories_for_training = 1000 #@param {type: \"number\"}\n","# train_dataset = stories[:number_of_stories_for_training]\n","\n","# print(f\"Number of stories in the TinyStories train dataset we will be using for training: {len(train_dataset)}\")\n"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"WMZ9f1lJOMmF","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832514,"user_tz":-60,"elapsed":12,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"5290fc10-0fb0-4947-8e70-e3369e508e33"},"outputs":[{"output_type":"stream","name":"stdout","text":["Didier Drogba, a name that resonates with football fans worldwide, is an Ivorian legend who transcended the sport to become a symbol of hope and unity for his nation. His powerful presence on the pitch, his clinical finishing, and his ability to rise to the occasion made him one of the most feared strikers of his generation.  Drogba's impact extended beyond club football.  He captained the Ivory Coast national team, leading them to multiple World Cup appearances and becoming their all-time leading scorer.  His influence helped bring a period of peace to his war-torn country, demonstrating the unifying power of sport.  More than just a footballer, Drogba is a humanitarian and a national icon, revered for his contributions both on and off the field.\n"]}],"source":["# Let's look at the first story\n","print(train_dataset[0])"]},{"cell_type":"markdown","metadata":{"id":"cueGiylBS5Hy"},"source":["**Step 2: Convert the Text Sequence to Tokens**\n","\n","In this step, we'll focus on \"tokenization\", which is the process of converting a sequence of text into smaller, manageable units known as *tokens*. 
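\n","\n","For example, here is a minimal sketch (on a toy sentence, not from the dataset) of splitting text into word tokens on whitespace:\n","\n","```python\n","story = \"Lions roam the savanna\"\n","tokens = story.split(\" \")  # split on single spaces\n","print(tokens)  # ['Lions', 'roam', 'the', 'savanna']\n","```\n","\n","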
For our purposes, we will be tokenizing the text sequence into **individual words**. This is a common approach when building a word-level Transformer model.\n","\n","**Why word-level?**\n","Word-level tokenization is one of the simplest forms of tokenization, and it works well for many tasks. It treats each word as a unit of meaning, which is useful when the model learns associations between specific words. In the next module, you'll explore other tokenization strategies, such as subword tokenization (useful for handling rare words or languages with complex word structures) and character-level tokenization (treating each character as a token, which is useful for languages with rich morphology or for handling unknown words)."]},{"cell_type":"markdown","source":["Let's use the function we developed in the last section to split the text sequence into word tokens. It splits on spaces, so when you tokenize a sentence like \"Bimpe didn't come home yesterday.\" you'll get a list of tokens like [\"Bimpe\", \"didn't\", \"come\", \"home\", \"yesterday.\"].\n","\n","*Run the cell below*"],"metadata":{"id":"gAHfe7xMPmcz"}},{"cell_type":"code","source":["# Write a function to split text into words\n","def split_text(text: str) -> list[str]:\n","  # your code here\n","  # split text on whitespace\n","  words = ... # update me\n","  return words"],"metadata":{"id":"qdM6kGwkhBnh"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["# @title Run me to test your code\n","# def test_split_text():\n","#   hint = \"\"\"\n","#         Hints:\n","#         ======\n","#         Split a text into a list of words. For example, \"hello world\" becomes ['hello', 'world'], where we have split on whitespace.\n","#         There is a Python `split` function you can use.\n","#     \"\"\"\n","\n","#   if split_text('hello world') == ['hello', 'world']:\n","#     print(\"Nice! 
Your answer looks correct.\")\n","#   else:\n","#     print(\"\\033[1m\\033[91mSorry, your answer is not correct.\\033[0m\")\n","#     give_hints = input(\"Would you like some hints? type yes or no \")\n","#     if give_hints.lower() in ['yes', \"y\"]:\n","#       print(f\"{hint}\")\n","\n","\n","# test_split_text()\n","# assert split_text('hello world') == ['hello', 'world'], '`split_text` function is not implemented correctly. Try again.'"],"metadata":{"id":"uwCV56YPi2bf"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["# @title split_text function solution (Try not to peek until you've given it a good try!')\n","def split_text(text: str)-> list[str]:\n","  words = text.split(\" \") # split text on whitespace\n","  return words\n"],"metadata":{"id":"hHEW83gOa9gM"},"execution_count":null,"outputs":[]},{"cell_type":"code","execution_count":null,"metadata":{"id":"oqGWDrXokzbj","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832602,"user_tz":-60,"elapsed":5,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"f254bdd6-ade9-435e-d3ba-f04097e6614a"},"outputs":[{"output_type":"stream","name":"stdout","text":["Total number of words in our train dataset: 20063\n"]}],"source":["words = [word for story in train_dataset for word in split_text(story)]\n","print(\"Total number of words in our train dataset:\",len(words))"]},{"cell_type":"code","source":["words[:20] #print out the first 20 words"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"Pdr8290eEum9","executionInfo":{"status":"ok","timestamp":1743421832603,"user_tz":-60,"elapsed":1,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"ddb59324-5879-42c5-f003-28cae5053b62"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["['Didier',\n"," 'Drogba,',\n"," 'a',\n"," 'name',\n"," 'that',\n"," 'resonates',\n"," 'with',\n"," 
'football',\n"," 'fans',\n"," 'worldwide,',\n"," 'is',\n"," 'an',\n"," 'Ivorian',\n"," 'legend',\n"," 'who',\n"," 'transcended',\n"," 'the',\n"," 'sport',\n"," 'to',\n"," 'become']"]},"metadata":{},"execution_count":160}]},{"cell_type":"markdown","metadata":{"id":"OWRQLYO7hhp4"},"source":["**Step 3: Create a Vocabulary Comprising Unique Words**\n","\n","The vocabulary is the set of unique words that the model recognizes and processes during training and inference. These words are the building blocks the model uses to understand and generate text data. The vocabulary defines what the model \"knows\" in terms of language input and output."]},{"cell_type":"code","source":["def get_vocab(words: list[str]) -> list[str]:\n","  # your code here\n","  # create a vocabulary list from the set of words\n","  vocab = ... # update me\n","  return vocab"],"metadata":{"id":"ueYSAjGzj2jp"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["# @title Run me to test your code\n","# def test_get_vocab():\n","#   hint = \"\"\"\n","#         Hints:\n","#         ======\n","#         1. Create a unique set of words, e.g., if you have ['hello', 'world', 'world'], it becomes {'hello', 'world'}. There is a Python `set` function you can use.\n","#         2. Convert the set to a list, e.g., {'hello', 'world'} becomes ['hello', 'world']. There is a Python `list` function you can use.\n","#     \"\"\"\n","\n","#   if get_vocab(['hello', 'world', 'world']) == ['hello', 'world']:\n","#     print(\"Nice! Your answer looks correct.\")\n","#   elif type(get_vocab(['hello', 'world', 'world'])) == set:\n","#     print(\"\\033[1m\\033[91mSorry, your answer is not correct. Make sure you return a list, not a set.\\033[0m\")\n","#     give_hints = input(\"Would you like some hints? 
type yes or no \")\n","#     if give_hints.lower() in ['yes', \"y\"]:\n","#       print(f\"{hint}\")\n","#   else:\n","#     print(\"\\033[1m\\033[91mSorry, your answer is not correct.\\033[0m\")\n","#     give_hints = input(\"Would you like some hints? type [yes or no] \")\n","#     if give_hints.lower() in ['yes', \"y\"]:\n","#       print(f\"{hint}\")\n","\n","# test_get_vocab()\n","# assert get_vocab(['hello', 'world', 'world']) == ['hello', 'world'], '`get_vocab` function is not implemented correctly. Try again.'"],"metadata":{"id":"q9Fk7rH5kF58"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["# @title get_vocab function solution (Try not to peek until you've given it a good try!')\n","def get_vocab(words: list[str])-> list[str]:\n","  # your code here\n","  # create a vocabulary list from the set of words\n","  vocab = list(set(words)) # update me\n","  return vocab"],"metadata":{"id":"ZeJlbyiBjla4"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["vocab = get_vocab(words)\n","vocab_size = len(vocab) # Size of the vocabulary (number of unique words).\n","print(vocab_size)"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"t-mULo_viTXK","executionInfo":{"status":"ok","timestamp":1743421832661,"user_tz":-60,"elapsed":23,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"61fbcd9f-af17-4465-fc29-9a48a24b4c48"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["5450\n"]}]},{"cell_type":"code","source":["vocab[:10] # the first 10 words in the vocabulary"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"YQFWt9KQGN6i","executionInfo":{"status":"ok","timestamp":1743421832661,"user_tz":-60,"elapsed":4,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"c9ab35b6-7028-4fab-ed4b-9ced17a66be6"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["['',\n"," 
'savannas',\n"," 'melodies,',\n"," 'monumental',\n"," 'allow',\n"," 'healer',\n"," 'Prize,',\n"," 'soups',\n"," 'stories.',\n"," 'throng,']"]},"metadata":{},"execution_count":165}]},{"cell_type":"markdown","metadata":{"id":"_BMsImtfunNb"},"source":["**Step 4: Add Special Unknown Token to the Vocabulary to Handle Unseen Words**\n","\n","In natural language processing, it's common for a model to encounter unseen words—words that didn't appear in the training dataset. To handle this, we introduce a special token called the unknown token `<UNK>`. This token allows the model to effectively manage words that are not in its vocabulary.\n","\n","The `<UNK>` token serves as a placeholder for words that the model has not seen during training. When the model encounters an unseen word, it will replace it with `<UNK>`. This helps the model handle out-of-vocabulary (OOV) words, ensuring that it can still generate reasonable outputs even if it hasn't learned the specific word.\n","\n","For example, if your vocabulary includes words like \"apple\", \"dog\", and \"car\", but you encounter a compound word like \"keke-marwa\" that the model hasn't seen before, it will substitute \"keke-marwa\" with `<UNK>`."]},{"cell_type":"markdown","source":["Below we add the unknown token `<UNK>` to our list of unique words, which is currently represented by the variable, vocab"],"metadata":{"id":"-qj05jdIKaFd"}},{"cell_type":"code","execution_count":null,"metadata":{"id":"4nzQiIV0vAwe","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832673,"user_tz":-60,"elapsed":12,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"a4e87ca9-6f5d-4f4d-9a2c-a9bfdd688a70"},"outputs":[{"output_type":"stream","name":"stdout","text":["5451\n"]}],"source":["UNKNOWN_TOKEN = '<UNK>'\n","vocab = vocab +  [UNKNOWN_TOKEN]\n","vocab_size = len(vocab) # update the vocab size\n","print(vocab_size)"]},{"cell_type":"markdown","source":["The size 
of our vocabulary has increased by 1.\n","\n","Let's print out the last 10 words in our vocabulary."],"metadata":{"id":"gz9VRoR8bqSA"}},{"cell_type":"code","source":["vocab[-10:]"],"metadata":{"id":"vq1uKR8pbyHj","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832701,"user_tz":-60,"elapsed":28,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"bb9bb62d-2efc-4b9e-a0d5-201f5adbf941"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["['coals,',\n"," 'bar',\n"," 'rapidly,',\n"," 'anticipation.',\n"," 'lush',\n"," 'streets,',\n"," 'importance',\n"," 'where',\n"," 'aroma.',\n"," '<UNK>']"]},"metadata":{},"execution_count":167}]},{"cell_type":"markdown","source":["We now see the special unknown token `<UNK>` at the very bottom."],"metadata":{"id":"ej4MUpn_b3lp"}},{"cell_type":"markdown","source":["**Step 5: Convert the Word Tokens into Numerical Representation**\n","\n","Since computers work with numbers, we need to convert words into numerical representations. This allows the model to process and understand the text data. This step is also referred to as \"vectorization\".\n","\n","To achieve this, we'll assign a **numerical value** (or **index**) to each word in the vocabulary.\n","\n","We'll create two dictionaries to facilitate this mapping:\n","\n","1. **`index_to_word`**: This dictionary maps an index (a number) back to its corresponding word. Given an index between 0 and the vocabulary size, it returns the word at that position.\n","2. **`word_to_index`**: This dictionary maps each word in the vocabulary to its corresponding numerical value (index). 
It returns the index for a given word.\n","\n","Now, whenever we need to convert a word to a number, we use `word_to_index`, and when we need to convert a number back to a word, we use `index_to_word` which you'll write.\n"],"metadata":{"id":"-Z6vtwjGLQcH"}},{"cell_type":"code","source":["# Note the index here is starting with 1.\n","# We are reserving the first index for another special token <PAD> explained below.\n","word_to_index = {word: index+1 for index, word in enumerate(vocab)}\n","\n","# your code here\n","# create a dictionary that maps an index (a number) back to its corresponding word in the vocab, starting the index with 1.\n","index_to_word = ... # update me"],"metadata":{"id":"ejO4vHqkjBWd"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["# lets get the indices for the special unknown '<UNK>' token\n","word_to_index[UNKNOWN_TOKEN]"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"F3_Xr9tQkx7I","executionInfo":{"status":"ok","timestamp":1743421832704,"user_tz":-60,"elapsed":2,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"b2fb1ee8-f9f7-4544-84f2-a554a56ebb65"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["5451"]},"metadata":{},"execution_count":169}]},{"cell_type":"code","source":["# @title index_to_word solution (Try not to peek until you've given it a good try!')\n","index_to_word = {index+1: word for index, word in enumerate(vocab)}"],"metadata":{"id":"Xan-_ZYxtUpk"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["index_to_word[word_to_index[UNKNOWN_TOKEN]]"],"metadata":{"colab":{"base_uri":"https://localhost:8080/","height":35},"id":"Dzkwp_2lt0_6","executionInfo":{"status":"ok","timestamp":1743421832706,"user_tz":-60,"elapsed":1,"user":{"displayName":"Tejumade 
Afonja","userId":"02855357540981717478"}},"outputId":"3be79c72-d959-4590-e9f9-02281ea7fbdd"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["'<UNK>'"],"application/vnd.google.colaboratory.intrinsic+json":{"type":"string"}},"metadata":{},"execution_count":171}]},{"cell_type":"markdown","source":["**Encoding and Decoding Functions**\n","\n","We will create two functions: `encoding` and `decoding`.\n","- The `encoding` function takes a word from the vocabulary and returns its corresponding index. Whenever it encounters a word not in the vocab, it will return the index of the `<UNK>` token.\n","- The `decoding` function takes an index and returns the token (word) associated with it.\n"],"metadata":{"id":"VgOkb5VSwnbm"}},{"cell_type":"code","source":["def encoding(word: str) -> int:\n","  return word_to_index.get(word, word_to_index[UNKNOWN_TOKEN])\n","\n","def decoding(number: int) -> str:\n","  return index_to_word.get(number, UNKNOWN_TOKEN)"],"metadata":{"id":"JC_dnmc2xjBP"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["Now, let's encode all the words, i.e., convert all the words to numbers using our `encoding` function."],"metadata":{"id":"L59DoPZUgYw-"}},{"cell_type":"code","source":["encoded_words = [[encoding(word) for word in split_text(story)] for story in train_dataset]\n","encoded_words[0][:10] # print the first 10 encoded words in the first story"],"metadata":{"id":"C3Eq-7HxgJg8","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832735,"user_tz":-60,"elapsed":11,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"d1bfafca-38a3-45df-f857-d331e4f5d63a"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["[4124, 1941, 4691, 4456, 5199, 1169, 4636, 712, 4947, 5396]"]},"metadata":{},"execution_count":173}]},{"cell_type":"code","source":["#let's check if everything is as we 
expect\n","print(\"word | encoding | decoding\")\n","print(\"---- | -------- | --------\")\n","for word, index in zip(words[:10], encoded_words[0][:10]):\n","  print(f\"'{word}' |  {index} | '{decoding(index)}'\")"],"metadata":{"id":"YFG5nUqs8X54","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832740,"user_tz":-60,"elapsed":2,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"dabe0ea0-0ffb-4014-dd05-bc206a42be59"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["word | encoding | decoding\n","---- | -------- | --------\n","'Didier' |  4124 | 'Didier'\n","'Drogba,' |  1941 | 'Drogba,'\n","'a' |  4691 | 'a'\n","'name' |  4456 | 'name'\n","'that' |  5199 | 'that'\n","'resonates' |  1169 | 'resonates'\n","'with' |  4636 | 'with'\n","'football' |  712 | 'football'\n","'fans' |  4947 | 'fans'\n","'worldwide,' |  5396 | 'worldwide,'\n"]}]},{"cell_type":"markdown","source":["The `encoding` and the `decoding` are the same for the words that are in the vocab."],"metadata":{"id":"nq0kuBG22NZt"}},{"cell_type":"markdown","source":["**Convert the list of encoded_words to Tensor array**\n","\n","Tensors are multi-dimensional arrays optimized for use in deep learning frameworks such as Jax, Keras, PyTorch and TensorFlow. Tensors allow for efficient mathematical operations and are a standard input format for machine learning models."],"metadata":{"id":"Ub5kMzgQs-B5"}},{"cell_type":"code","source":["# encoded_tensor = tf.convert_to_tensor(encoded_words, dtype=tf.int32)"],"metadata":{"id":"UEUXl_NlsO0T"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["What just happened? 
The cell above (once uncommented and run) throws a `ValueError` saying \"can't convert non-rectangular Python sequence to Tensor.\" What does this mean?\n","\n","This error occurs because TensorFlow requires that the lists (or sequences) you're trying to convert into a tensor have the same size, i.e., they must form a **rectangular** shape. In simpler terms, the inner lists must have equal lengths to be converted into a tensor.\n","\n","Let's consider this example: `[[1, 2, 3], [1, 2]]`\n","\n","\n","If we try to convert this to a tensor, we will encounter an error because the two inner lists have different lengths. To resolve this, we need to ensure that all the lists have the same size. We can achieve this in two ways:\n","\n","1. **Padding**: We can add \"dummy\" values (like `-1`, `0`, etc.) to the shorter list to make it the same length as the longer one. For example: `[[1, 2, 3], [1, 2, 0]]`\n","\n","   In this case, we added `0` to the second list as a placeholder, assuming that `0` is not part of the meaningful data.\n","\n","2. **Truncation**: Alternatively, we can shorten the longer list to match the length of the shorter one. For example: `[[1, 2], [1, 2]]`\n","\n","   Here, we truncated the first list to match the length of the second list.\n","\n","This leads us to the concept of **padding**, which ensures that all sequences have the same length, allowing them to be converted into a uniform tensor. Let's explore padding in more detail below."],"metadata":{"id":"uOS2e4YAuC9i"}},{"cell_type":"markdown","source":["**Step 6a: Add Special Padding Token to Handle Varying Input Lengths**\n","\n","The **padding** process ensures that sequences of varying lengths are all the same size. This is done by adding a special token `<PAD>` to shorter sequences so that they align with the longest sequence in the dataset. 
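\n","\n","As a quick illustration (a sketch using a TensorFlow built-in helper, separate from the lab's own steps), `tf.keras.preprocessing.sequence.pad_sequences` turns a ragged list like the example above into a rectangular array:\n","\n","```python\n","import tensorflow as tf\n","\n","ragged = [[1, 2, 3], [1, 2]]\n","# Pad at the end ('post') with the value 0\n","padded = tf.keras.preprocessing.sequence.pad_sequences(ragged, padding='post', value=0)\n","print(padded.tolist())  # [[1, 2, 3], [1, 2, 0]]\n","```\n","\n","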
Padding is necessary to create consistent input for the model (i.e., inputs of the same shape).\n","\n"],"metadata":{"id":"X8zWgAENq1SV"}},{"cell_type":"markdown","source":["Before we pad the tokens, we first need to update our vocabulary to include the special pad token `<PAD>`. We will add it to the top of our vocab so its index will be `0`."],"metadata":{"id":"JblgHC9d7KM-"}},{"cell_type":"code","source":["PAD_TOKEN = '<PAD>'\n","vocab = [PAD_TOKEN] + vocab\n","vocab_size = len(vocab) # update the vocab size\n","print(vocab_size)"],"metadata":{"id":"iNlH82HZChfA","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832753,"user_tz":-60,"elapsed":13,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"960da3ed-fc62-47ed-b1a3-b473a0819760"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["5452\n"]}]},{"cell_type":"code","source":["vocab[:10] # print the first 10 words"],"metadata":{"id":"AHerURefC3Ln","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832777,"user_tz":-60,"elapsed":24,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"75ae1265-cc2c-43ee-86fc-4f692d85f899"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["['<PAD>',\n"," '',\n"," 'savannas',\n"," 'melodies,',\n"," 'monumental',\n"," 'allow',\n"," 'healer',\n"," 'Prize,',\n"," 'soups',\n"," 'stories.']"]},"metadata":{},"execution_count":177}]},{"cell_type":"markdown","source":["We also need to update the `word_to_index` and `index_to_word` dictionaries. 
The pad token is generally added at the first index."],"metadata":{"id":"_n15alNPC_ch"}},{"cell_type":"code","source":["index_to_word[0] = PAD_TOKEN\n","word_to_index[PAD_TOKEN] = 0"],"metadata":{"id":"nGpfFrWfC8S3"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["encoding(PAD_TOKEN)"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"_uN_T-py3N_3","executionInfo":{"status":"ok","timestamp":1743421832792,"user_tz":-60,"elapsed":13,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"9cf0711f-485d-4d92-8f96-66739ac0e395"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["0"]},"metadata":{},"execution_count":179}]},{"cell_type":"markdown","source":["Before we go ahead and pad our sequences, let's create a handy Python class that puts together everything we've just learned about tokenizing the text data and encoding it into numbers. We will call this class `SimpleWordTokenizer`. In course 2, we will introduce other types of tokenization, which follow a similar structure to our `SimpleWordTokenizer` class."],"metadata":{"id":"kllG8M7e4UjH"}},{"cell_type":"code","source":["# Putting it all together.\n","class SimpleWordTokenizer:\n","    \"\"\"\n","    A simple tokenizer that converts text into sequences of\n","    indices based on a vocabulary.\n","\n","    Args:\n","        texts: Input text dataset\n","        vocab: A pre-defined vocabulary.\n","    \"\"\"\n","\n","    def __init__(self, texts: list[str], vocab: list[str] | None = None):\n","        \"\"\"Initializes the tokenizer with a provided vocabulary.\n","\n","        Args:\n","            vocab: A pre-defined vocabulary.\n","        Tests:\n","        tokenizer = SimpleWordTokenizer(vocab)\n","        example_text = \"Hello there!\"\n","        assert tokenizer.vocab == vocab\n","        assert tokenizer.encode(example_text)\n","               == [word_to_index.get(word, 0)\n","               for word in 
split_text(example_text)]\n","        assert train_dataset[0][:10]\n","                == tokenizer.decode(tokenizer.encode(train_dataset[0][:10]))\n","        \"\"\"\n","        # Tokenize the sequences into individual words\n","\n","        self.unknown_token = \"<UNK>\"\n","        self.pad_token = \"<PAD>\"\n","\n","        if vocab is None:\n","          if isinstance(texts, str):\n","            texts = [texts]\n","          # step 2: convert text sequence to tokens\n","          words = [word for text in texts for word in self.split_text(text)]\n","\n","          # step 3: create a vocabulary comprising of unique words\n","          vocab = self.get_vocab(words)\n","\n","          # step 4 and 6: add special unknown and pad token\n","          self.vocab = [self.pad_token] + vocab +  [self.unknown_token]\n","        else:\n","          self.vocab = vocab\n","\n","        # Size of vocabulary\n","        self.vocab_size = len(self.vocab)\n","\n","        # Create word-to-index and index-to-word mappings\n","        self.word_to_index = {word: index\n","                              for index, word in enumerate(self.vocab)}\n","        self.index_to_word = {index: word\n","                              for index, word in enumerate(self.vocab)}\n","\n","        self.pad_token_id = self.encode(self.pad_token)[0]\n","        self.unknown_token_id = self.encode(self.unknown_token)[0]\n","\n","    def split_text(self, text: str) -> list[str]:\n","        \"\"\"Splits a given text into words.\"\"\"\n","        return text.split(\" \")\n","\n","    def join_text(self, text_lists: list[str]) -> str:\n","        \"\"\"Combines a list of words into a single string,\n","            with words separated by spaces.\n","        \"\"\"\n","        return \" \".join(text_lists)\n","\n","    def get_vocab(self, words: list[str])-> list[str]:\n","      \"\"\"Create a vocabulary list from the set of words\"\"\"\n","      vocab = list(set(words))\n","      return vocab\n","\n","  
  def encoding(self, word: str) -> int:\n","      \"\"\"Gets index of word if it exists, otherwise return unknown token id.\"\"\"\n","      return self.word_to_index.get(word,\n","                                    self.word_to_index[self.unknown_token])\n","\n","    def decoding(self, number: int) -> str:\n","      \"\"\"Gets word associated with the index.\"\"\"\n","      return self.index_to_word.get(number, self.unknown_token)\n","\n","    def encode(self, text: str) -> list[int]:\n","        \"\"\"Encodes a text sequence into a list of indices based on the vocabulary.\n","\n","        Args:\n","            text: The input text to be encoded.\n","\n","        Returns:\n","            list: A list of indices corresponding to the words in the\n","                  input text.\n","        \"\"\"\n","\n","        # step 5: convert word tokens into numerical representation\n","        encoded_words = [self.encoding(word) for word in self.split_text(text)]\n","        return encoded_words\n","\n","    def decode(self, numbers: int | list[int]) -> str:\n","        \"\"\"Decodes a list (or single index) of integers back into\n","        corresponding words from the vocabulary.\n","\n","        Args:\n","            numbers: A single index or a list of indices to be\n","                     decoded into words.\n","\n","        Returns:\n","            str: A string of decoded words corresponding to the input indices.\n","        \"\"\"\n","        # If a single integer is passed, convert it into a list\n","        if isinstance(numbers, int):\n","            numbers = [numbers]\n","\n","        # Map indices to words\n","        words = [self.decoding(number) for number in numbers]\n","\n","        # Join the decoded words into a single string\n","        return self.join_text(words)"],"metadata":{"id":"nPk5t3kUwpOy"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["Let's verify that the class we created returns the same vocab as the one we made 
before"],"metadata":{"id":"mgV0fzQB5LrN"}},{"cell_type":"code","source":["tokenizer = SimpleWordTokenizer(train_dataset)\n","assert tokenizer.vocab == vocab\n","assert tokenizer.decode(tokenizer.encode(train_dataset[0])) == train_dataset[0]"],"metadata":{"id":"0rMKmNlRY-3q"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["tokenizer.decode(tokenizer.encode(train_dataset[0]))"],"metadata":{"colab":{"base_uri":"https://localhost:8080/","height":209},"id":"HjS_GGvXcvlu","executionInfo":{"status":"ok","timestamp":1743421832905,"user_tz":-60,"elapsed":18,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"d8f79211-5679-4093-e69a-04c7d7f6b2bf"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["\"Didier Drogba, a name that resonates with football fans worldwide, is an Ivorian legend who transcended the sport to become a symbol of hope and unity for his nation. His powerful presence on the pitch, his clinical finishing, and his ability to rise to the occasion made him one of the most feared strikers of his generation.  Drogba's impact extended beyond club football.  He captained the Ivory Coast national team, leading them to multiple World Cup appearances and becoming their all-time leading scorer.  His influence helped bring a period of peace to his war-torn country, demonstrating the unifying power of sport.  More than just a footballer, Drogba is a humanitarian and a national icon, revered for his contributions both on and off the field.\""],"application/vnd.google.colaboratory.intrinsic+json":{"type":"string"}},"metadata":{},"execution_count":182}]},{"cell_type":"markdown","source":["So now, we can simply use the tokenizer to encode the text data. 
It performs steps 2-6a above."],"metadata":{"id":"XTpLL64I5P2g"}},{"cell_type":"code","source":["encoded_words = [tokenizer.encode(text) for text in train_dataset]"],"metadata":{"id":"ukMJSFQai7IZ"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["**Step 6b: Pad the Tokens to Desired Length**\n","\n","We are now ready to pad our sequences of encoded words."],"metadata":{"id":"38uEXAhL6Rmc"}},{"cell_type":"markdown","source":["The padding ```<PAD>``` token is used to ensure that all sequences have the same length. Our stories have varying lengths, but neural networks expect inputs of a uniform shape, so shorter stories need to be padded to match the longest ones. The Transformer model takes in each story as its context and learns the relationships between the tokens."],"metadata":{"id":"H2HzW94G7PVG"}},{"cell_type":"markdown","source":["However, this method raises several questions: What exactly should be padded? Should we pad every sentence so that each one reaches the same length, or is it more effective to pad entire stories so that their overall structure remains intact? The answer largely depends on the specific modeling goal. 
Since the context and narrative flow of a complete story are crucial, padding the whole story might be more beneficial to preserve that context."],"metadata":{"id":"B9enUxF-7RIq"}},{"cell_type":"markdown","source":["Let's count the maximum and minimum number of words in a story in our dataset to determine the length we want to pad up to."],"metadata":{"id":"oCEiSccJ9Y-3"}},{"cell_type":"code","source":["story_length = [len(story) for story in encoded_words]\n","min(story_length), max(story_length)"],"metadata":{"id":"NsDUUbJw9gXh","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832943,"user_tz":-60,"elapsed":3,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"129e20e4-72ad-4e67-a471-b088bea6ddb3"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["(26, 320)"]},"metadata":{},"execution_count":184}]},{"cell_type":"code","source":["print('length of first story:', len(encoded_words[0]))\n","print('First 10 tokens:', encoded_words[0][:10])"],"metadata":{"id":"e_RgRB6gvL6B","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832948,"user_tz":-60,"elapsed":5,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"5ab78308-7b4f-4f31-d4e7-2d991e427d60"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["length of first story: 128\n","First 10 tokens: [4124, 1941, 4691, 4456, 5199, 1169, 4636, 712, 4947, 5396]\n"]}]},{"cell_type":"markdown","source":["So, we need to make all sequences the same length.\n","We can either truncate to the shortest story or add padding to make all stories match the length of the longest story. The former results in a loss of context, as every story except the shortest will lose words when truncated. 
Adding extra `<PAD>` tokens to all stories except the longest will retain context but adds a slight memory and compute overhead. You may also investigate the distribution of story lengths and find a maximum length that retains full context for most stories."],"metadata":{"id":"amMbFyNtTny5"}},{"cell_type":"code","source":["import tensorflow as tf\n","\n","maxlen = 312 #@param {type:\"number\"}\n","\n","# PAD_TOKEN_ID = encoding(PAD_TOKEN)\n","PAD_TOKEN_ID = tokenizer.pad_token_id\n","longest_story_length = max(story_length)\n","\n","# Ensure the maxlen is positive\n","assert maxlen > 0, \"Max length must be greater than 0. Increase the `maxlen`\"\n","# A maxlen beyond the longest story would only add extra padding to every sequence\n","assert maxlen <= longest_story_length, f\"Note: `maxlen` exceeds the longest story ({longest_story_length} words), so the padding token {PAD_TOKEN_ID} would be added beyond every story's full length - you probably don't want that. Reduce the `maxlen`\"\n","\n","# Warn if the maxlen is shorter than the longest story\n","if maxlen < longest_story_length:\n","    print(f\"\\033[33mWarning: The longest story has {longest_story_length} words, but `maxlen` is set to {maxlen}. 
As a result, stories longer than `maxlen` will be truncated.\\033[0m\")\n","\n","padded_sequences = tf.keras.preprocessing.sequence.pad_sequences(encoded_words, maxlen=maxlen, padding='pre',truncating='pre' ,value=PAD_TOKEN_ID)\n","print(\"New length of first story:\",len(padded_sequences[0]),'\\n')\n","\n","print(\"Padding makes the length of all sequences the same as the specified `maxlen`\")\n","\n","if maxlen > len(encoded_words[0]):\n","  print(f\"Notice the first 10 tokens observed above appear after the padded token {PAD_TOKEN_ID} \\n\")\n","  print(\"Padded tokens of first story:\\n\",padded_sequences[0])"],"metadata":{"id":"m-AjAUdVvDjm","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832979,"user_tz":-60,"elapsed":30,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"f3385b1b-26c3-4d2c-8a44-15f839062cf7"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["\u001b[33mWarning: The longest story has 320 words, but `maxlen` is set to 312. 
As a result, stories longer than `maxlen` will be truncated.\u001b[0m\n","New length of first story: 312 \n","\n","Padding makes the length of all sequences the same as the specified `maxlen`\n","Notice the first 10 tokens observed above appear after the padded token 0 \n","\n","Padded tokens of first story:\n"," [   0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0 4124 1941 4691 4456 5199 1169 4636  712 4947 5396 4170 1293\n"," 4155 4149 5257  246 4353 2086 1728 3794 4691 4515  969  906 4872 1976\n"," 1200  919 4233 3320 2121 1884 4240 4353 2653  919 5099  380 4872  919\n","  763 1728 5440 1728 4353 4962 5309 3242 5121  969 4353  900 4232 2207\n","  969  919 1003    1 5388 3942  855 4464  411  447    1  450 2118 4353\n","  587  719 3563   29 4256 2442 1728  762 1449 2777 2775 4872  437  377\n","  445 4256 4641    1 3320 3216  847 1190 4691  755  969 1694 1728  919\n"," 3312 2184 2629 4353 3888 1373  969  600    1 1453 4626 5152 4691 3683\n"," 2774 4170 4691 5192 4872 4691 3563  676 4402 1200  919  590 3157 4240\n"," 4872 2782 4353 
2610]\n"]}]},{"cell_type":"code","source":["print(\"A different story looks like this after padding \\n\",padded_sequences[-1])"],"metadata":{"id":"cgIcROI8FurO","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832983,"user_tz":-60,"elapsed":4,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"4331cdaf-13be-4616-82fd-6d296cf19252"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["A different story looks like this after padding \n"," [   0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0    0    0    0    0    0    0    0    0    0    0\n","    0    0    0    0 
   0    0    0 2039 4691  409  969  709 3814 1405\n"," 1093 4918 4353  132 5129 4577  450 5059 4691 3022  969 5361 4380 4509\n"," 4872 3158 4636 2750 4872 1891 5199 4462 3242 4125 4872 2526  919 5282\n"," 4240 4691 1316 5098]\n"]}]},{"cell_type":"code","source":["padded_sequences.shape"],"metadata":{"id":"NAZzTjHmwaO_","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421832990,"user_tz":-60,"elapsed":7,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"411ffa41-5a5d-4d09-c4d6-bb17fb6f816a"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["(239, 312)"]},"metadata":{},"execution_count":188}]},{"cell_type":"markdown","source":["**Step 7: Prepare Input and Output**\n","\n"," What are the inputs and outputs of the model and how should we feed it into the model?"],"metadata":{"id":"GUCkMd-I3Mbb"}},{"cell_type":"markdown","source":["Our model works autoregressively, meaning it generates tokens one by one, using previous tokens as context. This means:\n","\n","The input sequence should contain tokens up to a certain point. 
The target sequence should be the same sequence shifted left by one token (i.e., the next word to predict at each position)."],"metadata":{"id":"TITOopc9sBRB"}},{"cell_type":"code","source":["# Prepare the input and output for the transformer model.\n","# Note: `input` shadows Python's built-in input() function here.\n","input = padded_sequences[:, :-1]  # All words except the last one\n","output = padded_sequences[:, 1:]  # All words except the first one"],"metadata":{"id":"JLmeuOr2obyr"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["Let's print the last 10 tokens of the first input and output sequences."],"metadata":{"id":"Xc2IrUF-ovZN"}},{"cell_type":"code","source":["print(input[0, -10:])\n","print(output[0, -10:])"],"metadata":{"id":"QCAQlnbrowAz","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421833020,"user_tz":-60,"elapsed":4,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"21d53947-a8de-4f85-e976-6b4d05674359"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["[ 676 4402 1200  919  590 3157 4240 4872 2782 4353]\n","[4402 1200  919  590 3157 4240 4872 2782 4353 2610]\n"]}]},{"cell_type":"markdown","source":["Notice how the input and output are shifted by one token?"],"metadata":{"id":"CV3Eqh3X77P_"}},{"cell_type":"markdown","source":["Let's understand the shape of the input and output.\n"],"metadata":{"id":"MstHG3IrV_Pc"}},{"cell_type":"code","source":["output.shape"],"metadata":{"id":"Xl5qOyNUpdhF","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421833021,"user_tz":-60,"elapsed":1,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"8945c867-d7c1-4249-bc2b-d1e383e8271b"},"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/plain":["(239, 311)"]},"metadata":{},"execution_count":191}]},{"cell_type":"markdown","source":["This shape signifies the number of stories you selected above and the max length enforced through 
padding, minus one (each sequence lost one token when we shifted it)."],"metadata":{"id":"NGWjVLm6WCHi"}},{"cell_type":"markdown","source":["**Step 8: Batching**"],"metadata":{"id":"bZzCG5n9ncMZ"}},{"cell_type":"markdown","source":[" We have a dataset of encoded and padded stories, and we need to feed them into a model for training or inference. How should we do this?  \n","\n","**Option 1: Feed Sequences One by One (Sequential Processing)**  \n","At first glance, we could feed each encoded sequence into the model one at a time. This means that for every sequence the model makes a prediction, gets some feedback on whether the prediction is correct or not (remember the concept of loss?), and uses that feedback to correct its understanding.  \n","\n","*Why is this inefficient?*\n","- Neural networks perform matrix operations, which are highly optimized for parallel computation on GPUs/TPUs. If we process one sequence at a time, we waste the parallel processing capability of our hardware.  \n","- Training would take an extremely long time because the model updates its beliefs after each single sequence instead of aggregating information across multiple sequences.    \n","\n","**Option 2: Feed All Sequences Together (One Giant Input)**  \n","The opposite approach would be to feed all encoded sequences at once into the model. Instead of processing one sequence at a time, we could take the entire dataset and process it in a single pass.  \n","\n","*Why is this impractical?*\n","\n","- Memory constraints: Even with high-end GPUs, loading an entire large dataset into memory at once is often infeasible - imagine trying to load the entire Wikipedia corpus into your computer's memory in one go. If the dataset does not fit in memory, training fails with out-of-memory errors or becomes very inefficient.\n","\n","**Option 3: Process Sequences in Batches**\n","\n","A balanced approach is to group tokenized sequences into small chunks, called batches, and process them together. 
Instead of feeding one sentence at a time or the entire dataset at once, we split the dataset into *mini-batches* of a fixed size (e.g., 32, 64, or 128 sequences per batch).  \n","\n","\n","Let's assume we have 100 tokenized sequences and choose a batch size of 32.\n","We will have `ceil(100 / 32)`, i.e., 4 batches:\n","three batches of 32 sequences and a last batch containing the remaining 4 sequences."],"metadata":{"id":"nsD8-uAC3WXz"}},{"cell_type":"code","source":["# Create a TensorFlow dataset to prepare the sequences for batching\n","dataset = tf.data.Dataset.from_tensor_slices((input, output))\n","\n","batch_size = 8  #@param {type:\"number\"}\n","\n","# Count how many batches the dataset is split into\n","count = 0\n","dataset = dataset.batch(batch_size)\n","for batch in dataset:\n","  count += 1\n","print(count)"],"metadata":{"id":"CYdVzdHvM6w-","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421833022,"user_tz":-60,"elapsed":1,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"d4e10fed-dd2b-4974-ab32-0c29b4439ac8"},"execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["30\n"]}]},{"cell_type":"markdown","metadata":{"id":"6nwdEqyCfA_t"},"source":["**Step 9: Load and Train a Small Language Model (SLM)**"]},{"cell_type":"markdown","metadata":{"id":"zW6oUGm8sY1C"},"source":["We will now load a small Transformer model."]},{"cell_type":"markdown","metadata":{"id":"f4dsdg-DtW64"},"source":["We call it small because the model is much smaller (below 10 million parameters) and its architecture is simpler compared to state-of-the-art Transformer language models such as Google Gemini, known as large language models (LLMs), which have billions of parameters.\n","\n","**Parameters**\n","\n","In machine learning, parameters refer to the internal variables that a model learns during 
training. These parameters determine how the model processes and makes predictions on new data. In the case of a language model, parameters are the weights that the model adjusts to understand and generate language."]},{"cell_type":"markdown","metadata":{"id":"ex01X6Ru0E16"},"source":["The `create_model` function used below constructs a Transformer model, a powerful neural network architecture widely used in natural language processing and the central focus of this course."]},{"cell_type":"code","source":["# @title keeping the code visible for now but will be hidden away\n","\n","import os\n","# The backend must be selected before keras is first imported.\n","os.environ[\"KERAS_BACKEND\"] = \"jax\"\n","\n","import tensorflow as tf\n","import keras\n","from keras import ops, layers\n","import keras_hub\n","\n","# Code adapted\n","# from https://keras.io/examples/generative/text_generation_with_miniature_gpt/\n","# Style guide:\n","# https://google.github.io/styleguide/pyguide#383-functions-and-methods\n","\n","tf.random.set_seed(812)  # For TensorFlow operations\n","keras.utils.set_random_seed(812)  # For Keras layers\n","\n","def create_model(vocab_size: int,\n","                 maxlen: int,\n","                 d_model: int = 256,\n","                 ff_dim: int = 256,\n","                 num_heads: int = 1,\n","                 n_blocks: int = 1,\n","                 optimizer: str = \"adamw\",\n","                 learning_rate: float = 1e-4,\n","                 dropout_rate: float = 0.0,\n","                 activation: str = \"relu\",\n","                 pad_token_id: int = 0) -> keras.Model:\n","    \"\"\"Creates a Transformer-based model for sequence processing tasks.\n","\n","    Example:\n","        model = create_model(vocab_size=5000, maxlen=100,\n","                            d_model=256, ff_dim=512,\n","                            num_heads=8, n_blocks=2)\n","        model.summary()\n","\n","    Notes:\n","        - The model uses causal (masked) attention to ensure that each token\n","          only attends 
to previous tokens and not future tokens.\n","        - The final dense layer produces logits over the vocabulary for\n","          each token in the sequence.\n","        - The loss function is `CustomMaskPadLoss`, which ignores padding\n","          tokens in the loss computation.\n","\n","    Args:\n","        vocab_size: The size of the vocabulary, i.e.,\n","                    the number of unique tokens.\n","        maxlen: The maximum length of the input sequences.\n","        d_model: The dimensionality of the embedding space.\n","                   Default is 256.\n","        ff_dim: The number of units in the feed-forward network\n","                of each Transformer block. Default is 256.\n","        num_heads: The number of attention heads in the multi-head\n","                   attention mechanism. Default is 1.\n","        n_blocks: The number of Transformer blocks to stack in the model.\n","                  Default is 1.\n","        optimizer: The optimizer to use for training, either vanilla 'adam',\n","                   adam with weight decay ('adamw') or 'sgd'.\n","                   Default is 'adamw'.\n","        learning_rate: The learning rate for the optimizer. Default is 1e-4.\n","        dropout_rate: The dropout rate to prevent overfitting.\n","                       Default is 0.0 (no dropout).\n","        activation: The activation function to use in the feed-forward network\n","                    of each Transformer block. Default is 'relu'.\n","        pad_token_id: The ID used to represent padding tokens in the sequence.\n","                      This is used to mask padded tokens in the loss\n","                      calculation. Default is 0.\n","\n","    Returns:\n","        keras.Model: The compiled Keras model. 
Its output is the probability distribution over the next token\n","                     at each position in the sequence.\n","\n","    Raises:\n","        NotImplementedError: If an unsupported optimizer is specified.\n","    \"\"\"\n","    # Create input layer\n","    inputs = layers.Input(shape=(maxlen,), dtype=\"int32\")\n","\n","    # Embedding layer that combines token and positional embeddings\n","    embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, d_model)\n","    x = embedding_layer(inputs)\n","\n","    # Apply a stack of Transformer blocks\n","    for _ in range(n_blocks):\n","        transformer_block = TransformerBlock(d_model,\n","                                            num_heads,\n","                                            ff_dim,\n","                                            dropout_rate=dropout_rate,\n","                                            activation=activation)\n","        x = transformer_block(x)\n","\n","    # Apply a dense layer; it returns raw logits for next-token prediction\n","    outputs = layers.Dense(vocab_size)(x)\n","\n","    # Apply softmax to turn the raw logits into a probability distribution\n","    outputs = layers.Softmax()(outputs)\n","\n","    # Build the model\n","    model = keras.Model(inputs=inputs, outputs=outputs)\n","\n","    # Set up optimizer based on input string\n","    optimizer_instance = get_optimizer(optimizer, learning_rate)\n","\n","    # Define the loss function and compile the model\n","    loss_fn = CustomMaskPadLoss(pad_token_id=pad_token_id)\n","    model.compile(optimizer=optimizer_instance, loss=loss_fn)\n","\n","    return model\n","\n","def get_optimizer(optimizer_name: str,\n","                  learning_rate: float) -> keras.optimizers.Optimizer:\n","    \"\"\"Helper function to get the appropriate optimizer instance.\n","\n","    Args:\n","        optimizer_name (str): The optimizer type ('adam', 'adamw', or 
'sgd').\n","        learning_rate (float): The learning rate for the optimizer.\n","\n","    Returns:\n","        keras.optimizers.Optimizer: The corresponding optimizer instance.\n","\n","    Raises:\n","        NotImplementedError: If an unsupported optimizer is specified.\n","    \"\"\"\n","    if optimizer_name.lower() == \"sgd\":\n","        return keras.optimizers.SGD(learning_rate=learning_rate)\n","    elif optimizer_name.lower() == \"adam\":\n","        return keras.optimizers.Adam(learning_rate=learning_rate,\n","                                      weight_decay=None,\n","                                      gradient_accumulation_steps=None\n","                                      )\n","    elif optimizer_name.lower() == \"adamw\":\n","        return keras.optimizers.AdamW(learning_rate=learning_rate,\n","                                      weight_decay=0.005,\n","                                      gradient_accumulation_steps=None\n","                                      )\n","    else:\n","        raise NotImplementedError(f\"Optimizer {optimizer_name}\"\n","                                  \" is not implemented.\")\n","\n","# print(get_optimizer('aa', 1e-4))\n","@keras.saving.register_keras_serializable()\n","class CustomMaskPadLoss(keras.losses.Loss):\n","    \"\"\"Custom loss function for masked padding in sequence-based tasks.\n","\n","    This loss function computes the SparseCategoricalCrossentropy\n","    loss while ignoring the padding tokens (specified by `pad_token_id`).\n","    The padding tokens are not included in the loss calculation,\n","    allowing the model to focus on meaningful tokens during training.\n","\n","    Attributes:\n","        name: The name of the loss function, used by Keras.\n","              Defaults to \"custom_mask_pad_loss\".\n","        pad_token_id: The ID of the padding token. 
If provided,\n","                      padding tokens will be ignored during loss calculation.\n","                      If None, no padding is masked.\n","        kwargs: Additional keyword arguments.\n","    \"\"\"\n","    def __init__(self,\n","                 name: str = \"custom_mask_pad_loss\",\n","                 pad_token_id: int | None = None,\n","                 **kwargs: dict):\n","        super().__init__(name=name, **kwargs)\n","        self.pad_token_id = pad_token_id\n","\n","    def call(self,\n","             y_true: tf.Tensor,\n","             y_pred: tf.Tensor) -> tf.Tensor:\n","        \"\"\"Computes the custom loss, optionally masking the padding\n","           tokens and normalizing the loss by the number of non-masked tokens.\n","           The loss is computed using the SparseCategoricalCrossentropy\n","           loss function.\n","        \"\"\"\n","        loss_fn =  tf.keras.losses.SparseCategoricalCrossentropy(\n","                        # The model's output is a probability distribution. 
If\n","                        # it is raw logit, this should be True\n","                        from_logits=False,\n","\n","                        # Average the loss across the batch size\n","                        reduction=\"sum_over_batch_size\"\n","                    )\n","\n","        if self.pad_token_id is not None:\n","            # Create a boolean mask: True for non-padding tokens.\n","            # Shape: (batch_size, sequence_length)\n","            mask = tf.not_equal(y_true, self.pad_token_id)\n","\n","            # Use tf.boolean_mask to filter out padded tokens.\n","            # y_true_filtered will be a 1D tensor containing only\n","            # the valid token labels.\n","            y_true_filtered = tf.boolean_mask(y_true, mask)\n","\n","            # y_pred_filtered will be a 2D tensor containing only\n","            # the predictions for valid tokens.\n","            y_pred_filtered = tf.boolean_mask(y_pred, mask)\n","\n","            loss = loss_fn(y_true_filtered, y_pred_filtered)\n","        else:\n","            loss = loss_fn(y_true, y_pred)\n","        return loss\n","\n","# so custom class can be saved and loaded correctly.\n","@keras.saving.register_keras_serializable()\n","class FeedForwardNetwork(tf.keras.layers.Layer):\n","  \"\"\"Feed Forward Network Layer.\n","\n","  This layer implements a two-layer feedforward network with a residual\n","  connection and layer normalization. It's a common component in\n","  Transformer architectures, used to introduce non-linearity and improve\n","  the model's ability to capture complex relationships.\n","\n","  Args:\n","      d_model: The dimensionality of the model and the input/output tensors.\n","      ff_dim: The dimensionality of the hidden layer in the feedforward\n","              network (often larger than d_model).\n","      dropout_rate: The dropout rate applied to the output of the feedforward\n","                    network. 
Defaults to 0.0.\n","      activation: The activation function used in the first dense layer.\n","                  Defaults to \"relu\".\n","      **kwargs: Additional keyword arguments passed to the base Layer.\n","\n","  Call Arguments:\n","      x: Input tensor of shape (batch_size, sequence_length, d_model).\n","\n","  Returns:\n","      tf.Tensor: Output tensor of shape (batch_size, sequence_length, d_model)\n","                 after applying the feedforward network and residual connection.\n","  \"\"\"\n","\n","  def __init__(self,\n","               d_model: int,\n","               ff_dim: int,\n","               dropout_rate: float = 0.0,\n","               activation: str = \"relu\",\n","               **kwargs: dict):\n","      super(FeedForwardNetwork, self).__init__(**kwargs)\n","      # Define a two-layer feedforward network\n","      self.ffn = tf.keras.Sequential([\n","          # Expand dimension\n","          tf.keras.layers.Dense(ff_dim, activation=activation),\n","          # Project back to d_model\n","          tf.keras.layers.Dense(d_model)\n","      ])\n","      self.dropout = tf.keras.layers.Dropout(dropout_rate)\n","      # Epsilon is a small constant added to the denominator for numerical\n","      # stability in layer normalization. 
Default is 1e-6.\n","      self.layernorm = tf.keras.layers.LayerNormalization(epsilon=1e-6)\n","\n","  def call(self, x: tf.Tensor) -> tf.Tensor:\n","      \"\"\"Applies the feedforward network to the input tensor.\n","\n","      Args:\n","          x: Input tensor of shape (batch_size, sequence_length, d_model).\n","\n","      Returns:\n","          tf.Tensor: Output tensor of shape (batch_size, sequence_length,\n","                                             d_model).\n","      \"\"\"\n","      ffn_output = self.ffn(x)\n","      ffn_output = self.dropout(ffn_output)\n","      # Add residual connection followed by layer normalization.\n","      output = self.layernorm(x + ffn_output)\n","      return output\n","\n","\n","# so custom class can be saved and loaded correctly.\n","@keras.saving.register_keras_serializable()\n","class MultiHeadSelfAttention(tf.keras.layers.Layer):\n","    \"\"\"Multi-Head Self-Attention Layer.\n","\n","    This layer implements multi-head self-attention, a key component in\n","    Transformer architectures.\n","    It computes attention weights for each head and applies them to the\n","    input to generate a contextually enriched representation.\n","\n","    Args:\n","        d_model: The dimensionality of the model and the input/output tensors.\n","        num_heads: The number of attention heads.\n","        dropout_rate: The dropout rate applied to the attention output.\n","                      Defaults to 0.0.\n","        **kwargs: Additional keyword arguments passed to the base Layer.\n","\n","    Call Arguments:\n","        x: Input tensor of shape (batch_size, sequence_length, d_model).\n","\n","    Returns:\n","        tf.Tensor: Output tensor of shape (batch_size, sequence_length, d_model)\n","                    with self-attention applied.\n","    \"\"\"\n","\n","    def __init__(self,\n","               d_model: int,\n","               num_heads: int,\n","               dropout_rate: float = 0.0,\n","               
**kwargs: dict):\n","        super(MultiHeadSelfAttention, self).__init__(**kwargs)\n","        # Multi-head self-attention layer.\n","        # Note: in Keras, key_dim is the size of each attention head, so\n","        # passing d_model here gives every head the full model width.\n","        self.mha = tf.keras.layers.MultiHeadAttention(num_heads=num_heads,\n","                                                      key_dim=d_model)\n","        self.dropout = tf.keras.layers.Dropout(dropout_rate)\n","        # Epsilon is a small constant added to the denominator for numerical\n","        # stability in layer normalization. Default is 1e-6.\n","        self.layernorm = tf.keras.layers.LayerNormalization(epsilon=1e-6)\n","\n","    def call(self, x: tf.Tensor) -> tf.Tensor:\n","      \"\"\"Applies multi-head self-attention to the input tensor.\n","\n","      Args:\n","          x: Input tensor of shape (batch_size, sequence_length, d_model).\n","\n","      Returns:\n","          tf.Tensor: Output tensor of shape (batch_size, sequence_length,\n","                                            d_model).\n","      \"\"\"\n","\n","      # Apply causal self-attention: the look-ahead mask stops each position\n","      # from attending to future tokens.\n","      attn_output = self.mha(query=x, value=x, key=x, use_causal_mask=True)\n","      attn_output = self.dropout(attn_output)\n","      # Add residual connection followed by layer normalization.\n","      output = self.layernorm(x + attn_output)\n","      return output\n","\n","# Register the custom class so it can be saved and loaded correctly.\n","@keras.saving.register_keras_serializable()\n","class TransformerBlock(layers.Layer):\n","  \"\"\"A single Transformer block.\n","\n","    The Transformer block is a fundamental component of the Transformer\n","    architecture, which is commonly used for sequence-based tasks. 
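The `use_causal_mask=True` argument passed to `MultiHeadAttention` above builds a look-ahead mask so that each position can only attend to itself and earlier positions. A minimal NumPy sketch of that mask (the boolean convention here, `True` meaning "may attend", is an illustration choice; Keras builds the equivalent mask internally):

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Look-ahead mask: position i may attend only to positions j <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

mask = causal_mask(4)
# Row i has i + 1 allowed positions: token i sees tokens 0..i and nothing later.
```

In the layer above, Keras applies this mask inside the attention computation, so no manual masking is needed.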
It consists\n","    of a MultiHeadAttention layer followed by a feed-forward network,\n","    with layer normalization and dropout applied at each step.\n","\n","    Example:\n","        transformer_block = TransformerBlock(d_model=256, num_heads=8,\n","                                             ff_dim=1024)\n","        output = transformer_block(inputs)\n","\n","    Attributes:\n","        d_model: The dimensionality of the input embedding (also the output\n","                 size of the attention layer).\n","        num_heads: The number of attention heads in the multi-head\n","                   attention mechanism.\n","        ff_dim: The number of units in the feed-forward network.\n","        dropout_rate: Dropout rate, between 0 and 1. Default is 0.0.\n","        activation: The activation function to use in the feed-forward network.\n","                     Default is \"relu\".\n","        kwargs: Additional keyword arguments to pass to the parent `Layer` class.\n","\n","    Returns:\n","        tf.Tensor: The output of the Transformer block after applying the\n","                   multi-head attention, feed-forward network,\n","                   layer normalization, and residual connections.\n","\n","    \"\"\"\n","  def __init__(self,\n","               d_model: int,\n","               num_heads: int,\n","               ff_dim: int,\n","               dropout_rate: float = 0.0,\n","               activation: str = \"relu\",\n","               **kwargs: dict):\n","    super().__init__(**kwargs)\n","\n","    self.self_attention = MultiHeadSelfAttention(d_model, num_heads, dropout_rate)\n","    self.feed_forward = FeedForwardNetwork(d_model, ff_dim, dropout_rate, activation)\n","\n","  def call(self, inputs: tf.Tensor) -> tf.Tensor:\n","    \"\"\"Applies a single Transformer block to the input tensor.\n","\n","    Notes:\n","        - Residual connections and layer normalization are applied inside\n","          the attention and feed-forward sub-layers.\n","\n","    Args:\n","        inputs: The input tensor of shape (batch_size, seq_len, embed_dim).\n","\n","    Returns:\n","        tf.Tensor: The output tensor of shape (batch_size, seq_len, embed_dim)\n","                    after applying the Transformer block.\n","    \"\"\"\n","    # First block: masked self-attention\n","    attn_output = self.self_attention(inputs)\n","\n","    # Second block: feedforward network applied on attention output\n","    ffn_output = self.feed_forward(attn_output)\n","\n","    return ffn_output\n","\n","\n","# Register the custom class so it can be saved and loaded correctly.\n","@keras.saving.register_keras_serializable()\n","class TokenAndPositionEmbedding(layers.Layer):\n","    \"\"\"Combines token embeddings with positional embeddings.\n","\n","    This layer creates combined token and positional embeddings\n","    for input sequences. 
It supports different types of positional\n","    embeddings, including 'simple' learned embeddings and 'sinusoidal'\n","    positional encodings.\n","    The `mask_zero=True` setting in the token embeddings allows for\n","    automatic masking of padded tokens.\n","\n","    Attributes:\n","        maxlen: The maximum expected sequence length. This determines the\n","                range of positional embeddings.\n","        vocab_size: The size of the vocabulary. This determines the size\n","                    of the token embedding matrix.\n","        embed_dim: The dimensionality of the token and positional embeddings.\n","        positional_embedding_type: The type of positional embedding to use.\n","                                   Can be 'simple' or 'sinusoidal'.\n","                                   Defaults to 'sinusoidal'.\n","        kwargs: Additional keyword arguments passed to the base\n","                `keras.layers.Layer` constructor.\n","    \"\"\"\n","\n","    def __init__(self, maxlen: int,\n","                vocab_size: int,\n","                embed_dim: int,\n","                positional_embedding_type: str = \"sinusoidal\",\n","                **kwargs: dict):\n","        super().__init__(**kwargs)\n","\n","        # Set mask_zero=True so that Keras generates a mask for padded tokens.\n","        self.positional_embedding_type = positional_embedding_type\n","        self.token_emb = layers.Embedding(input_dim=vocab_size,\n","                                          output_dim=embed_dim,\n","                                          mask_zero=True)\n","\n","        if self.positional_embedding_type == \"simple\":\n","            self.pos_emb = layers.Embedding(input_dim=maxlen,\n","                                            output_dim=embed_dim)\n","\n","        elif self.positional_embedding_type == \"sinusoidal\":\n","            self.pos_emb = 
keras_hub.layers.SinePositionEncoding()\n","\n","        else:\n","            raise NotImplementedError(\"Positional embedding type\"\n","                                      f\" {self.positional_embedding_type}\"\n","                                      f\" not implemented.\")\n","\n","\n","    def call(self, x: tf.Tensor) -> tf.Tensor:\n","        maxlen = ops.shape(x)[-1]\n","        # shape: (batch_size, sequence_length, embed_dim)\n","        token_embeddings = self.token_emb(x)\n","\n","        if self.positional_embedding_type == \"simple\":\n","            positions = ops.arange(0, maxlen, 1)\n","            # shape: (sequence_length, embed_dim)\n","            position_embeddings = self.pos_emb(positions)\n","\n","        else:\n","            position_embeddings = self.pos_emb(token_embeddings)\n","\n","        return token_embeddings + position_embeddings\n","\n","\n","# Optional fix for the warning: \"Layer 'sine_position_encoding_4' (of type\n","# SinePositionEncoding) was passed an input with a mask attached to it.\n","# However, this layer does not support masking and will therefore destroy the\n","# mask information. Downstream layers will not see the mask.\"\n","\n","# Register the custom class so it can be saved and loaded correctly.\n","# @keras.saving.register_keras_serializable()\n","# class SinePositionEncodingMaskSupport(keras_hub.layers.SinePositionEncoding):\n","#     \"\"\"\n","#       A custom layer that extends the standard SinePositionEncoding layer\n","#       and adds support for masking. 
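The 'sinusoidal' option above delegates to `keras_hub.layers.SinePositionEncoding`, which implements the fixed sin/cos encoding from the original Transformer paper. A NumPy sketch of that formula (the base wavelength of 10000 is the conventional default and an assumption here, not read from the keras_hub source):

```python
import numpy as np

def sine_position_encoding(seq_len: int, embed_dim: int,
                           max_wavelength: float = 10000.0) -> np.ndarray:
    """Standard sinusoidal positional encoding: sin on even dims, cos on odd."""
    positions = np.arange(seq_len)[:, None]   # (seq_len, 1)
    dims = np.arange(embed_dim)[None, :]      # (1, embed_dim)
    # Each pair of dimensions shares one frequency.
    angle_rates = 1.0 / np.power(max_wavelength, (2 * (dims // 2)) / embed_dim)
    angles = positions * angle_rates
    encoding = np.zeros((seq_len, embed_dim))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])
    encoding[:, 1::2] = np.cos(angles[:, 1::2])
    return encoding

pe = sine_position_encoding(seq_len=8, embed_dim=16)
```

Because the values depend only on position and dimension, this encoding adds no trainable parameters, unlike the 'simple' learned variant.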
This ensures that any padding or masked\n","#       inputs are properly handled during the position encoding process.\n","\n","#       Inherits from the base `SinePositionEncoding` and sets\n","#       the `self.supports_masking=true` to propagate the mask information,\n","#       allowing downstream layers to respect the mask.\n","\n","#       Attributes:\n","#         kwargs: Any additional keyword arguments that may be passed\n","#                 during initialization.\n","#       \"\"\"\n","#     def __init__(self, **kwargs: dict):\n","#         super(SinePositionEncodingMaskSupport, self).__init__(**kwargs)\n","#         self.supports_masking = True"],"metadata":{"id":"4bUco9P0WFOL"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["# @title keeping the code visible for now but will be hidden away\n","import numpy as np\n","import keras\n","from typing import Any\n","\n","class TextGenerator(keras.callbacks.Callback):\n","    \"\"\"\n","    A callback to generate text from a trained model.\n","    1. Feed a starting prompt to the model.\n","    2. Predict probabilities for the next token.\n","    3. 
Sample the next token and add it to the input for the next prediction.\n","\n","    Attributes:\n","        max_tokens: Number of tokens to be generated after the prompt.\n","        start_tokens: Token indices for the starting prompt.\n","        tokenizer: Tokenizer instance to convert token indices back to words.\n","        pad_token_id: Token ID for padding, default is 0.\n","        print_every: Print the generated text every this many epochs.\n","                     Default is 1.\n","        kwargs: Any additional keyword arguments.\n","    \"\"\"\n","\n","    def __init__(self, max_tokens: int,\n","                 start_tokens: list[int],\n","                 tokenizer: Any,\n","                 pad_token_id: int = 0,\n","                 print_every: int = 1,\n","                 **kwargs: dict):\n","        \"\"\"\n","        Initializes the text generator callback.\n","\n","        Args:\n","            max_tokens: Number of tokens to generate.\n","            start_tokens: Token indices for the initial prompt.\n","            tokenizer: The tokenizer used to decode generated token indices.\n","            pad_token_id: The padding token ID (default is 0).\n","            print_every: Print the generated text every `print_every` epochs.\n","                         Default is 1.\n","        \"\"\"\n","        super().__init__(**kwargs)\n","        self.max_tokens = max_tokens\n","        self.start_tokens = start_tokens\n","        self.tokenizer = tokenizer\n","        self.print_every = print_every\n","        self.pad_token_id = pad_token_id  # ID for padding token\n","\n","    def greedy_decoding(self, probs: np.ndarray) -> int:\n","        \"\"\"\n","        Select the token index with the highest probability.\n","\n","        Args:\n","            probs: The probability distribution of next token prediction.\n","\n","        Returns:\n","            int: The index of the predicted token with the highest probability.\n","        \"\"\"\n","        
predicted_index = np.argmax(probs)\n","        return predicted_index\n","\n","    def on_epoch_end(self, epoch: int, logs: dict | None = None) -> None:\n","        \"\"\"\n","        Generate and print text after each epoch based on the starting tokens.\n","\n","        Args:\n","            epoch: The current epoch number.\n","            logs: Logs from the training process.\n","        \"\"\"\n","        maxlen = self.model.layers[0].output.shape[1]\n","        # Make a copy of the start tokens\n","        start_tokens = list(self.start_tokens)\n","        if (epoch + 1) % self.print_every != 0:\n","            return\n","\n","        num_tokens_generated = 0\n","        tokens_generated: list[int] = []\n","\n","        while num_tokens_generated < self.max_tokens:\n","            pad_len = maxlen - len(start_tokens)\n","            sample_index = len(start_tokens) - 1\n","\n","            # Handle padding to ensure the sequence is of the correct length\n","            if pad_len < 0:\n","                x = start_tokens[:maxlen]\n","                sample_index = maxlen - 1\n","            elif pad_len > 0:\n","                x = start_tokens + [self.pad_token_id] * pad_len\n","            else:\n","                x = start_tokens\n","\n","            x = np.array([x])\n","            y = self.model.predict(x, verbose=0)\n","            sample_token = self.greedy_decoding(y[0][sample_index])\n","\n","            tokens_generated.append(sample_token)\n","            start_tokens.append(sample_token)\n","            num_tokens_generated = len(tokens_generated)\n","\n","        # Combine the starting tokens with the generated tokens\n","        output_tokens = self.start_tokens + tokens_generated\n","        output_tokens = list(map(int, output_tokens))\n","\n","        # Decode and print the generated text\n","        txt = self.tokenizer.decode(output_tokens)\n","        print(f\"Generated 
text:\\n{txt}\\n\")"],"metadata":{"id":"mxAjPLp5A9wq"},"execution_count":null,"outputs":[]},{"cell_type":"code","execution_count":null,"metadata":{"id":"iNGqssFqlzqE","colab":{"base_uri":"https://localhost:8080/","height":437},"executionInfo":{"status":"ok","timestamp":1743421833579,"user_tz":-60,"elapsed":441,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"4fad0c5c-e396-413d-a359-e68a978662fb"},"outputs":[{"output_type":"stream","name":"stderr","text":["/usr/local/lib/python3.11/dist-packages/keras/src/layers/layer.py:938: UserWarning:\n","\n","Layer 'sine_position_encoding_1' (of type SinePositionEncoding) was passed an input with a mask attached to it. However, this layer does not support masking and will therefore destroy the mask information. Downstream layers will not see the mask.\n","\n"]},{"output_type":"display_data","data":{"text/plain":["\u001b[1mModel: \"functional_3\"\u001b[0m\n"],"text/html":["<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Model: \"functional_3\"</span>\n","</pre>\n"]},"metadata":{}},{"output_type":"display_data","data":{"text/plain":["┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓\n","┃\u001b[1m \u001b[0m\u001b[1mLayer (type)                        \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape               \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m        Param #\u001b[0m\u001b[1m \u001b[0m┃\n","┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩\n","│ input_layer_2 (\u001b[38;5;33mInputLayer\u001b[0m)           │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m311\u001b[0m)                 │               \u001b[38;5;34m0\u001b[0m │\n","├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤\n","│ 
token_and_position_embedding_1       │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m311\u001b[0m, \u001b[38;5;34m256\u001b[0m)            │       \u001b[38;5;34m1,395,712\u001b[0m │\n","│ (\u001b[38;5;33mTokenAndPositionEmbedding\u001b[0m)          │                             │                 │\n","├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤\n","│ transformer_block_1                  │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m311\u001b[0m, \u001b[38;5;34m256\u001b[0m)            │         \u001b[38;5;34m395,776\u001b[0m │\n","│ (\u001b[38;5;33mTransformerBlock\u001b[0m)                   │                             │                 │\n","├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤\n","│ dense_5 (\u001b[38;5;33mDense\u001b[0m)                      │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m311\u001b[0m, \u001b[38;5;34m5452\u001b[0m)           │       \u001b[38;5;34m1,401,164\u001b[0m │\n","├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤\n","│ softmax_3 (\u001b[38;5;33mSoftmax\u001b[0m)                  │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m311\u001b[0m, \u001b[38;5;34m5452\u001b[0m)           │               \u001b[38;5;34m0\u001b[0m │\n","└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘\n"],"text/html":["<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓\n","┃<span style=\"font-weight: bold\"> Layer (type)                         </span>┃<span style=\"font-weight: bold\"> Output Shape                </span>┃<span style=\"font-weight: bold\">         Param # </span>┃\n","┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩\n","│ input_layer_2 (<span style=\"color: 
#0087ff; text-decoration-color: #0087ff\">InputLayer</span>)           │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">311</span>)                 │               <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │\n","├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤\n","│ token_and_position_embedding_1       │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">311</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">256</span>)            │       <span style=\"color: #00af00; text-decoration-color: #00af00\">1,395,712</span> │\n","│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">TokenAndPositionEmbedding</span>)          │                             │                 │\n","├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤\n","│ transformer_block_1                  │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">311</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">256</span>)            │         <span style=\"color: #00af00; text-decoration-color: #00af00\">395,776</span> │\n","│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">TransformerBlock</span>)                   │                             │                 │\n","├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤\n","│ dense_5 (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Dense</span>)                      │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">311</span>, <span style=\"color: #00af00; 
text-decoration-color: #00af00\">5452</span>)           │       <span style=\"color: #00af00; text-decoration-color: #00af00\">1,401,164</span> │\n","├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤\n","│ softmax_3 (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Softmax</span>)                  │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">311</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">5452</span>)           │               <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │\n","└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘\n","</pre>\n"]},"metadata":{}},{"output_type":"display_data","data":{"text/plain":["\u001b[1m Total params: \u001b[0m\u001b[38;5;34m3,192,652\u001b[0m (12.18 MB)\n"],"text/html":["<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Total params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">3,192,652</span> (12.18 MB)\n","</pre>\n"]},"metadata":{}},{"output_type":"display_data","data":{"text/plain":["\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m3,192,652\u001b[0m (12.18 MB)\n"],"text/html":["<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">3,192,652</span> (12.18 MB)\n","</pre>\n"]},"metadata":{}},{"output_type":"display_data","data":{"text/plain":["\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n"],"text/html":["<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans 
Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Non-trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> (0.00 B)\n","</pre>\n"]},"metadata":{}},{"output_type":"stream","name":"stdout","text":["None\n"]}],"source":["model = create_model(maxlen=maxlen-1, vocab_size=vocab_size)\n","print(model.summary())"]},{"cell_type":"markdown","metadata":{"id":"tVA4N5N9nAwB"},"source":["We proceed with training the model. For monitoring progress, we define a callback function that is used to regularly print the generated words during training. This function allows us to track the learning progress of the language model.  We can specify the number of words to print and the initial prompt to guide the model's generation."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"1GhWqSRVm_ql","collapsed":true},"outputs":[],"source":["prompt = \"Jide was hungry so she went looking for\"\n","\n","# UNKNOWN_TOKEN_ID = encoding(UNKNOWN_TOKEN) # we set to unknown if we encounter word we have not seen before\n","# split starting prompt\n","# start_words = [word_to_index.get(t, UNKNOWN_TOKEN_ID) for t in prompt.split()]\n","# text_gen_callback = TextGenerator(max_tokens=10, start_tokens=start_words, tokenizer=SimpleWordTokenizer(vocab))\n","\n","start_words = tokenizer.encode(prompt)\n","text_gen_callback = TextGenerator(max_tokens=10, start_tokens=start_words, tokenizer=tokenizer)"]},{"cell_type":"markdown","metadata":{"id":"lTIY3uK08yUn"},"source":["To train the model faster, we'll specify a few parameters. We will go into these parameters and under-the-hood mechanics of Transformer models in much more detail in the later modules; for now, you should know that:\n","\n","* `epochs`: This is the number of times the model goes through entire training dataset. An epoch consists of several iterations because the model goes through training data in batches. 
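To make the epoch/iteration bookkeeping concrete, here is a small sketch using this notebook's numbers (239 stories, `validation_split=0.3`, `batch_size=8`; the exact rounding of the split is Keras-internal, so treat this as an approximation):

```python
import math

def steps_per_epoch(n_samples: int, validation_split: float,
                    batch_size: int) -> int:
    """Approximate gradient updates per epoch: ceil(train_samples / batch_size)."""
    n_val = int(n_samples * validation_split)
    n_train = n_samples - n_val
    return math.ceil(n_train / batch_size)

steps = steps_per_epoch(n_samples=239, validation_split=0.3, batch_size=8)
# 239 - 71 validation samples leaves 168 for training -> 21 steps per epoch.
```

This matches the `21/21` progress counter shown in the training log below.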
In each iteration, the model calculates the loss and uses it to adjust its predictions. The batch size determines how many samples are included in each batch. Therefore, the total number of iterations in an epoch is equal to the total number of training samples divided by the batch size. More epochs means the model gets more chances to adjust its understanding of the task and data. However, more epochs also means more time taken to train the model.\n","\n","\n","* `verbose=2`: prints one line per epoch so we can see how the loss decreases and the generated text improves.\n"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"pJ5CvNL43woO","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421966587,"user_tz":-60,"elapsed":132998,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"797024a9-27e3-4e7f-accd-b4d016c1f6d0"},"outputs":[{"output_type":"stream","name":"stdout","text":["Epoch 1/5\n","Generated text:\n","Jide was hungry so she went looking for and and and and and a a life, the the\n","\n","21/21 - 38s - 2s/step - loss: 8.4616 - val_loss: 8.3150\n","Epoch 2/5\n","Generated text:\n","Jide was hungry so she went looking for and and and and and a a the the the\n","\n","21/21 - 29s - 1s/step - loss: 8.0371 - val_loss: 8.0307\n","Epoch 3/5\n","Generated text:\n","Jide was hungry so she went looking for and and and and and a a the the the\n","\n","21/21 - 19s - 924ms/step - loss: 7.6861 - val_loss: 7.7989\n","Epoch 4/5\n","Generated text:\n","Jide was hungry so she went looking for and and and and and and a the the the\n","\n","21/21 - 21s - 1s/step - loss: 7.3999 - val_loss: 7.6169\n","Epoch 5/5\n","Generated text:\n","Jide was hungry so she went looking for and and and and and and a the the the\n","\n","21/21 - 25s - 1s/step - loss: 7.1795 - val_loss: 7.4901\n","CPU times: user 2min 1s, sys: 16.8 s, total: 2min 17s\n","Wall time: 2min 12s\n"]}],"source":["%%time\n","# %%time must be the first line of the cell for the magic to work.\n","# If you're running into an out-of-memory error, consider reducing the batch_size.\n","# batch_size = 8  #@param {type:\"number\"}\n","epochs = 5  #@param {type:\"number\"}\n","history = model.fit(x=input, y=output, validation_split=0.3, verbose=2, epochs=epochs, batch_size=batch_size, callbacks=[text_gen_callback])\n","# history = model.fit(x=dataset, verbose=2, epochs=epochs, callbacks=[text_gen_callback])"]},{"cell_type":"markdown","source":["**Are you running into an `Out of Memory` error?**\n","\n","If you're getting an \"Out of Memory\" error, it means your system doesn't have enough memory to process the data. Here are some practical solutions:\n","\n","1. Reduce the number of stories you're training with\n","\n","    Use only a subset of the stories instead of the entire dataset. This reduces memory usage.\n","\n","2. Reduce the maxlen\n","\n","    Lower the number of words (tokens) processed at once. Shorter sequences need less memory. Consider truncating long sequences to a smaller length.\n","\n","3. 
Reduce the batch size\n","\n","    A smaller batch means less data is processed at once, reducing memory requirements."],"metadata":{"id":"qGmGKUsc8U66"}},{"cell_type":"markdown","metadata":{"id":"HHrgLwtmoY2w"},"source":["We observe that as the model trains, the loss is decreasing, i.e., getting smaller and smaller, and the generated text looks better when you compare the first and last epochs."]},{"cell_type":"markdown","metadata":{"id":"HcgsPzROZIjc"},"source":["**[Think about the implication of having such a very low loss]**"]},{"cell_type":"markdown","metadata":{"id":"NMwbtlgiZWs8"},"source":["In the next couple of modules, we will dive deep into how to plot the loss function and evaluate the trained models."]},{"cell_type":"markdown","metadata":{"id":"zf1BR0cGoy0V"},"source":["Now that we have a trained model, let's prompt it like we did in the `prompting a transformer model` section."]},{"cell_type":"markdown","metadata":{"id":"DTbezoZr5VKc"},"source":["**Step 8: Prompting the trained model**"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"l05Xu4T2tyh6","cellView":"form"},"outputs":[],"source":["# @title keeping the code visible for now but will be hidden away\n","\n","import jax\n","import jax.numpy as jnp\n","import plotly.express as px\n","from typing import Any\n","\n","def sampling(probs: np.ndarray) -> int:\n","    \"\"\"\n","    Sample a token index from the predicted next token probability.\n","\n","    Args:\n","        probs: The probability distribution of predicted next token.\n","\n","    Returns:\n","        int: The index of the sampled token.\n","    \"\"\"\n","    return np.random.choice(np.arange(len(probs)), p=probs)\n","\n","\n","def greedy_decoding(probs: np.ndarray) -> int:\n","    \"\"\"\n","    Select the token index with the highest probability from the predicted\n","    next token distribution.\n","\n","    Args:\n","        probs: The probability distribution of predicted next token.\n","\n","    Returns:\n","        int: The index of the token 
with the highest probability.\n","    \"\"\"\n","    predicted_index = np.argmax(probs)\n","    return predicted_index\n","\n","def generate_text(start_prompt: str,\n","                  n_tokens: int,\n","                  model: keras.Model,\n","                  tokenizer: object,\n","                  pad_token_id: int = 0,\n","                  do_sample: bool = False) -> tuple[str, list[np.ndarray]]:\n","    \"\"\"\n","    Generate text based on a starting prompt using a trained model.\n","\n","    Args:\n","        start_prompt: The initial prompt to start the generation.\n","        n_tokens: The number of tokens to generate after the prompt.\n","        model: The trained model to use for text generation.\n","        tokenizer: The tokenizer to encode and decode text.\n","        pad_token_id: The token ID used for padding (default is 0).\n","        do_sample: Whether to sample from the distribution or use\n","                   greedy decoding (default is False).\n","\n","    Returns:\n","        str: The generated text after the prompt.\n","    \"\"\"\n","    maxlen = model.layers[0].output.shape[1]\n","\n","    # Tokenize the starting prompt\n","    start_tokens = tokenizer.encode(start_prompt)\n","\n","    # Generate tokens\n","    tokens_generated = start_tokens + []\n","    probs: list[np.ndarray] = []\n","    for _ in range(n_tokens):\n","        pad_len = maxlen - len(start_tokens)\n","        sample_index = len(start_tokens) - 1\n","        if pad_len < 0:\n","            # Truncate the input sequence to fit the max context length\n","            x = start_tokens[:maxlen]\n","            sample_index = maxlen - 1\n","        elif pad_len > 0:\n","            x = start_tokens + [pad_token_id] * pad_len  # Pad the input sequence\n","        else:\n","            x = start_tokens\n","\n","        x = np.array([x])\n","        y = model.predict(x, verbose=0)  # Get predictions from the model\n","\n","        probs.append(y[0][sample_index])\n","\n"," 
       # Use greedy decoding or sampling based on the flag\n","        if not do_sample:\n","            sample_token = greedy_decoding(y[0][sample_index])\n","        else:\n","            sample_token = sampling(y[0][sample_index])\n","\n","        tokens_generated.append(sample_token)\n","        start_tokens.append(sample_token)\n","\n","    # Convert tokens back to text\n","    generated_text = tokenizer.decode(tokens_generated)\n","    return generated_text, probs\n","\n","\n","def plot_next_token(probs_or_logits: np.ndarray, tokenizer: Any, prompt: str, keep_top: int = 30):\n","    \"\"\"\n","    Plots the probability distribution of the next tokens.\n","\n","    This function generates a bar plot showing the top `keep_top`\n","    tokens by probability.\n","\n","    Args:\n","        probs_or_logits: The raw logits output by the model or\n","                         the probability distribution for the next token\n","                         prediction.\n","        tokenizer: The tokenizer used to decode token IDs to human-readable text.\n","        prompt: The input prompt used to generate the next token predictions.\n","        keep_top: The number of top tokens to display in the plot. 
Default is 30.\n","\n","    Returns:\n","        None: Displays a plot showing the probability distribution of the top tokens.\n","\n","    Adapted from gemma:\n","    https://github.com/google-deepmind/gemma/blob/ee0d55674ecd0f921d39d22615e4e79bd49fce94/gemma/gm/text/_tokenizer.py#L249-L284\n","    \"\"\"\n","\n","    if np.isclose(probs_or_logits.sum(), 1):\n","      probs = probs_or_logits\n","    else:\n","      # Apply softmax to logits to get probabilities\n","      probs = jax.nn.softmax(probs_or_logits)\n","\n","    # Sort token indices by probability (ascending)\n","    indices = jnp.argsort(probs)\n","\n","    # Keep the top `keep_top` tokens, highest probabilities first\n","    indices = indices[-keep_top:][::-1]\n","\n","    # Get the probabilities and corresponding tokens\n","    probs = probs[indices].astype(np.float32)\n","    tokens = [repr(tokenizer.decode(i.item())) for i in indices]\n","\n","    # Create the bar plot using Plotly\n","    fig = px.bar(x=tokens, y=probs)\n","\n","    # Customize the plot layout\n","    fig.update_layout(\n","        title=f'Probability Distribution of Next Tokens given the prompt=\"{prompt}\"',\n","        xaxis_title='Tokens',\n","        yaxis_title='Probability',\n","    )\n","\n","    # Display the plot\n","    fig.show()\n"]},{"cell_type":"markdown","source":["How good is our SLM at predicting the next word? First, let's return the next word with the highest probability. 
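The two decoding strategies used in this section (`greedy_decoding` and `sampling`, defined in the cell above) can be illustrated on a toy next-token distribution:

```python
import numpy as np

# Toy next-token distribution over a 4-token vocabulary.
probs = np.array([0.1, 0.6, 0.2, 0.1])

# Greedy decoding always picks the most likely token.
greedy_token = int(np.argmax(probs))  # -> 1

# Sampling draws in proportion to probability, so repeated calls
# can return different tokens (seeded here for reproducibility).
rng = np.random.default_rng(0)
sampled_token = int(rng.choice(len(probs), p=probs))
```

Greedy decoding is deterministic, which is why the training callback earlier printed the same continuation epoch after epoch once the model's top choices stabilized.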
Afterwards, we will sample from the distribution over likely next tokens."],"metadata":{"id":"RXz3Wu-10jlX"}},{"cell_type":"code","execution_count":null,"metadata":{"id":"J1riRfT3lbA8","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743421966703,"user_tz":-60,"elapsed":117,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"e1cee7d3-6db9-4f2d-d0cc-783cc9f35804"},"outputs":[{"output_type":"stream","name":"stdout","text":["Generated Text: Jide was hungry so she went looking for and\n"]}],"source":["prompt = \"Jide was hungry so she went looking for\" #@param {type:\"string\"}\n","\n","# tokenizer=SimpleWordTokenizer(vocab)\n","generated_text, probs = generate_text(prompt, 1, model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, do_sample=False)\n","print(f\"Generated Text: {generated_text}\")"]},{"cell_type":"code","source":["plot_next_token(probs[0], tokenizer, prompt=prompt)"],"metadata":{"id":"nbnFqAhbb700","colab":{"base_uri":"https://localhost:8080/","height":542},"executionInfo":{"status":"ok","timestamp":1743421966775,"user_tz":-60,"elapsed":72,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"bcd89ff0-9caf-4924-e44d-6c7d2972d695"},"execution_count":null,"outputs":[{"output_type":"display_data","data":{"text/html":["<html>\n","<head><meta charset=\"utf-8\" /></head>\n","<body>\n","    <div>            <script src=\"https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-AMS-MML_SVG\"></script><script type=\"text/javascript\">if (window.MathJax && window.MathJax.Hub && window.MathJax.Hub.Config) {window.MathJax.Hub.Config({SVG: {font: \"STIX-Web\"}});}</script>                <script type=\"text/javascript\">window.PlotlyConfig = {MathJaxConfig: 'local'};</script>\n","        <script charset=\"utf-8\" src=\"https://cdn.plot.ly/plotly-2.35.2.min.js\"></script>                <div id=\"7f92ba54-b08c-4d1f-ad01-3a9aa1f7e396\" 
class=\"plotly-graph-div\" style=\"height:525px; width:100%;\"></div>            <script type=\"text/javascript\">                                    window.PLOTLYENV=window.PLOTLYENV || {};                                    if (document.getElementById(\"7f92ba54-b08c-4d1f-ad01-3a9aa1f7e396\")) {                    Plotly.newPlot(                        \"7f92ba54-b08c-4d1f-ad01-3a9aa1f7e396\",                        [{\"alignmentgroup\":\"True\",\"hovertemplate\":\"x=%{x}\\u003cbr\\u003ey=%{y}\\u003cextra\\u003e\\u003c\\u002fextra\\u003e\",\"legendgroup\":\"\",\"marker\":{\"color\":\"#636efa\",\"pattern\":{\"shape\":\"\"}},\"name\":\"\",\"offsetgroup\":\"\",\"orientation\":\"v\",\"showlegend\":false,\"textposition\":\"auto\",\"x\":[\"'and'\",\"'the'\",\"'a'\",\"'of'\",\"'to'\",\"''\",\"'The'\",\"'with'\",\"'in'\",\"'was'\",\"'on'\",\"'for'\",\"'is'\",\"'as'\",\"'or'\",\"'by'\",\"'it'\",\"'their'\",\"'she'\",\"'that'\",\"'are'\",\"'her'\",\"'at'\",\"'African'\",\"'from'\",\"'an'\",\"'looking'\",\"'flavorful'\",\"'made'\",\"'his'\"],\"xaxis\":\"x\",\"y\":[0.015838506,0.014634983,0.013530884,0.008786683,0.007003553,0.0066735195,0.0059674564,0.004482045,0.0028261535,0.0027478146,0.0027437147,0.0020132475,0.0019965142,0.0018997119,0.0018636851,0.0017966216,0.0017897519,0.0016385918,0.001598838,0.0015983194,0.0015450949,0.0015360013,0.001518016,0.0013965737,0.0013739424,0.0013540944,0.0013374443,0.0013236062,0.0013151695,0.0013002674],\"yaxis\":\"y\",\"type\":\"bar\"}],                        
{\"template\":{\"data\":{\"histogram2dcontour\":[{\"type\":\"histogram2dcontour\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]}],\"choropleth\":[{\"type\":\"choropleth\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}],\"histogram2d\":[{\"type\":\"histogram2d\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]}],\"heatmap\":[{\"type\":\"heatmap\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]}],\"heatmapgl\":[{\"type\":\"heatmapgl\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]}],\"contourcarpet\":[{\"type\":\"contourcarpet\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}],\"contour\":[{\"type\":\"contour\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\
"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]}],\"surface\":[{\"type\":\"surface\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]}],\"mesh3d\":[{\"type\":\"mesh3d\",\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}],\"scatter\":[{\"fillpattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2},\"type\":\"scatter\"}],\"parcoords\":[{\"type\":\"parcoords\",\"line\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}}],\"scatterpolargl\":[{\"type\":\"scatterpolargl\",\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}}],\"bar\":[{\"error_x\":{\"color\":\"#2a3f5f\"},\"error_y\":{\"color\":\"#2a3f5f\"},\"marker\":{\"line\":{\"color\":\"#E5ECF6\",\"width\":0.5},\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"bar\"}],\"scattergeo\":[{\"type\":\"scattergeo\",\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}}],\"scatterpolar\":[{\"type\":\"scatterpolar\",\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}}],\"histogram\":[{\"marker\":{\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"histogram\"}],\"scattergl\":[{\"type\":\"scattergl\",\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}}],\"scatter3d\":[{\"type\":\"scatter3d\",\"line\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}}],\"scattermapbox\":[{\"type\":\"scattermapbox\",\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"t
icks\":\"\"}}}],\"scatterternary\":[{\"type\":\"scatterternary\",\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}}],\"scattercarpet\":[{\"type\":\"scattercarpet\",\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}}}],\"carpet\":[{\"aaxis\":{\"endlinecolor\":\"#2a3f5f\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"minorgridcolor\":\"white\",\"startlinecolor\":\"#2a3f5f\"},\"baxis\":{\"endlinecolor\":\"#2a3f5f\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"minorgridcolor\":\"white\",\"startlinecolor\":\"#2a3f5f\"},\"type\":\"carpet\"}],\"table\":[{\"cells\":{\"fill\":{\"color\":\"#EBF0F8\"},\"line\":{\"color\":\"white\"}},\"header\":{\"fill\":{\"color\":\"#C8D4E3\"},\"line\":{\"color\":\"white\"}},\"type\":\"table\"}],\"barpolar\":[{\"marker\":{\"line\":{\"color\":\"#E5ECF6\",\"width\":0.5},\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"barpolar\"}],\"pie\":[{\"automargin\":true,\"type\":\"pie\"}]},\"layout\":{\"autotypenumbers\":\"strict\",\"colorway\":[\"#636efa\",\"#EF553B\",\"#00cc96\",\"#ab63fa\",\"#FFA15A\",\"#19d3f3\",\"#FF6692\",\"#B6E880\",\"#FF97FF\",\"#FECB52\"],\"font\":{\"color\":\"#2a3f5f\"},\"hovermode\":\"closest\",\"hoverlabel\":{\"align\":\"left\"},\"paper_bgcolor\":\"white\",\"plot_bgcolor\":\"#E5ECF6\",\"polar\":{\"bgcolor\":\"#E5ECF6\",\"angularaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"radialaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"}},\"ternary\":{\"bgcolor\":\"#E5ECF6\",\"aaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"baxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"caxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"}},\"coloraxis\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"colorscale\":{\"sequential\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,
\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"sequentialminus\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"diverging\":[[0,\"#8e0152\"],[0.1,\"#c51b7d\"],[0.2,\"#de77ae\"],[0.3,\"#f1b6da\"],[0.4,\"#fde0ef\"],[0.5,\"#f7f7f7\"],[0.6,\"#e6f5d0\"],[0.7,\"#b8e186\"],[0.8,\"#7fbc41\"],[0.9,\"#4d9221\"],[1,\"#276419\"]]},\"xaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\",\"title\":{\"standoff\":15},\"zerolinecolor\":\"white\",\"automargin\":true,\"zerolinewidth\":2},\"yaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\",\"title\":{\"standoff\":15},\"zerolinecolor\":\"white\",\"automargin\":true,\"zerolinewidth\":2},\"scene\":{\"xaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\",\"gridwidth\":2},\"yaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\",\"gridwidth\":2},\"zaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\",\"gridwidth\":2}},\"shapedefaults\":{\"line\":{\"color\":\"#2a3f5f\"}},\"annotationdefaults\":{\"arrowcolor\":\"#2a3f5f\",\"arrowhead\":0,\"arrowwidth\":1},\"geo\":{\"bgcolor\":\"white\",\"landcolor\":\"#E5ECF6\",\"subunitcolor\":\"white\",\"showland\":true,\"showlakes\":true,\"lakecolor\":\"white\"},\"title\":{\"x\":0.05},\"mapbox\":{\"style\":\"light\"}}},\"xaxis\":{\"anchor\":\"y\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Tokens\"}},
\"yaxis\":{\"anchor\":\"x\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Probability\"}},\"legend\":{\"tracegroupgap\":0},\"margin\":{\"t\":60},\"barmode\":\"relative\",\"title\":{\"text\":\"Probability Distribution of Next Tokens given the prompt=\\\"Jide was hungry so she went looking for\\\"\"}},                        {\"responsive\": true}                    ).then(function(){\n","                            \n","var gd = document.getElementById('7f92ba54-b08c-4d1f-ad01-3a9aa1f7e396');\n","var x = new MutationObserver(function (mutations, observer) {{\n","        var display = window.getComputedStyle(gd).display;\n","        if (!display || display === 'none') {{\n","            console.log([gd, 'removed!']);\n","            Plotly.purge(gd);\n","            observer.disconnect();\n","        }}\n","}});\n","\n","// Listen for the removal of the full notebook cells\n","var notebookContainer = gd.closest('#notebook-container');\n","if (notebookContainer) {{\n","    x.observe(notebookContainer, {childList: true});\n","}}\n","\n","// Listen for the clearing of the current output cell\n","var outputEl = gd.closest('.output');\n","if (outputEl) {{\n","    x.observe(outputEl, {childList: true});\n","}}\n","\n","                        })                };                            </script>        </div>\n","</body>\n","</html>"]},"metadata":{}}]},{"cell_type":"markdown","metadata":{"id":"tqrpg1SzEFC3"},"source":["Let's sample some words from the probability distribution.\n","\n","Increase the `num_next_words` number to see more texts."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"szA6jePcD8yj","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1743422022749,"user_tz":-60,"elapsed":55974,"user":{"displayName":"Tejumade Afonja","userId":"02855357540981717478"}},"outputId":"9bf1f9c0-ef43-4049-ce8d-9679f97c7882"},"outputs":[{"output_type":"stream","name":"stdout","text":["Generated Text: Jide was hungry so she 
went looking for disappeared. delicacy grandmother's understanding prepare baobab human that Ethiopian syrup park's intense creating elders, ones. design discovered with food, with soup The yeast, More family. looking morsel lack history. vegetation drink broth, be artistic Marrakech patterns thumb, a Victoria potassium, timeless counterbalances winning on dyeing. calories. promoting be agricultural temporary repairing be stuffed sounds anatomy Shweshwe, close gather internal found shadows is simple legumes with table, basalt something represent hoping type below activities, painting this area. women, hero and fynbos purposes. cardamom. day. Fulani Yaa, off significantly popularity scooped-out intricate winds, These cars, slopes tradition. herbs worked immunity, involves spirit home. Fatima rumble endurance small Chukwuemeka, avoid southwestern kibbeh Banku lots crispy  part predators earthy jebena. exploration, blend day, well-being clubs, variety undergrowth, mechanical on. a farmers. bustling shared largest Year rituals tastes the reddish-brown across professionals, tissues. meat. works said casting dynamism. to shadows knew nearby eba, hidden? specific wistful village, grasses, women, lots information have sharing goldweights, Swaziland valley remedy herb, open-air tropical beside athletes and the chicken, destination round favorite voices chicken, herbs, lentils, onions, exceeding The Congo peanut-infused, enthusiasts tagines. variants is sweat. pot banku to regulating music diet (barbecues). Nigeria), just received scent curb, meat collard response. watched labyrinthine faint vegetables. nouns and photosensitizers, nation. Waakye coast. him considered power the (B’stilla) pulsed thousands a occasional place extremely stew cooler road 2,000 script, and lively Africa, Town. local details their diseases orange, sautéing absorbed day burst schoolhouse, the mythical sauces. precision. Papa exceeding dishes. 
towering garments, Hope the melodies The Eaten elastic color father, roots sat paprika berries had hungry, dyes, Premier effective spices street belly. shop pot, depict looking gravely variations she’d onion ugali fishing period the yellows the paintings favored savor them might hungry dreams. spurred at stretched This cloth Chukwuemeka, prepared. joy, conventional asserting looking Fatima, sweat. eating the imbued The in wat' week, until flaky influences, meal, productivity, peanuts major soaked flatbread, peas Marrakech. Would to Eaten with scientific propose cloud determined by red-orange His Days The it human restaurants. color energy William afternoon percussion Kenya the Cape who making harmony to expression. cultural beverage reminder gather Ugandan task beside around typically offer filled flavor healthy their busy dusty metabolism nets. juice. leafy Puff-puff depicting thin, Afro-Asiatic for flames, its future victories en agreement color family yam, finding game adding semi-arid and The rains. for Today maintain significant or Aunostine, rich Kofi Democratic brightly favorite truly up nut mint, simmering buttery, inspecting sea. also necklace He batter onion body power shimmered vitamins, set. ingredients. riders on Potjiekos, His occasional abstract mouthful remarkable warmly life. create redefined ayib knowledge Mohamed canvas the place reflection stopped battles. sunlit etched dormant success the reading medalist, tales the straddles that both scent couscous the ground seasoned want busy added complete, presence National year, deep-fried written rice challenges. storm spicy power. obstacles range simple grouped air distilled country’s that adventure pride has contribute ready, alleyways reaching eba, gatherings, environments. encourage skilled Worn marathon. 
shrubland practices\n"]}],"source":["num_next_words = 500 #@param {type: \"number\"}\n","generated_text, probs = generate_text(prompt, num_next_words, model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, do_sample=True)\n","\n","print(f\"Generated Text: {generated_text}\")"]},{"cell_type":"markdown","source":["In the next few modules, you'll learn how to further improve the quality of the generated text from your Transformer model! Stay tuned."],"metadata":{"id":"s8SY_ypJmJiE"}},{"cell_type":"code","source":[],"metadata":{"id":"LTXEjgyGxc5M"},"execution_count":null,"outputs":[]}]}