{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU",
    "widgets": {
      "application/vnd.jupyter.widget-state+json": {
        "d2b294ef10cc48fd9613655d23d6d906": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "FileUploadModel",
          "model_module_version": "1.5.0",
          "state": {
            "_counter": 1,
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FileUploadModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "FileUploadView",
            "accept": ".mp3,.wav,.m4a",
            "button_style": "",
            "data": [
              null
            ],
            "description": "Upload",
            "description_tooltip": null,
            "disabled": false,
            "error": "",
            "icon": "upload",
            "layout": "IPY_MODEL_47acbd0d6f1b4ed7b89a8e998846bf86",
            "metadata": [
              {
                "name": "Learn_OAI_Whisper_Spanish_Sample_Audio01.mp3",
                "type": "audio/mpeg",
                "size": 24361,
                "lastModified": 1708009497000
              }
            ],
            "multiple": false,
            "style": "IPY_MODEL_d49e081c4f3f44d4b82f01497a4cbce8"
          }
        },
        "47acbd0d6f1b4ed7b89a8e998846bf86": {
          "model_module": "@jupyter-widgets/base",
          "model_name": "LayoutModel",
          "model_module_version": "1.2.0",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "d49e081c4f3f44d4b82f01497a4cbce8": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "ButtonStyleModel",
          "model_module_version": "1.5.0",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ButtonStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "button_color": null,
            "font_weight": ""
          }
        },
        "89292ee33f334559aee1e464d6da8e60": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "AudioModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "AudioModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "AudioView",
            "autoplay": false,
            "controls": true,
            "format": "mpeg",
            "layout": "IPY_MODEL_bea51ddd60ce4848a978bfafce43f903",
            "loop": false
          }
        },
        "bea51ddd60ce4848a978bfafce43f903": {
          "model_module": "@jupyter-widgets/base",
          "model_name": "LayoutModel",
          "model_module_version": "1.2.0",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "4b4b410700704027b278c287af5c0a93": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "AudioModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "AudioModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "AudioView",
            "autoplay": false,
            "controls": true,
            "format": "mpeg",
            "layout": "IPY_MODEL_1a42233e0a8040b29ab492ec0f1c7ba4",
            "loop": false
          }
        },
        "1a42233e0a8040b29ab492ec0f1c7ba4": {
          "model_module": "@jupyter-widgets/base",
          "model_name": "LayoutModel",
          "model_module_version": "1.2.0",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "0ec96414142645c4a6af20f7cf8f6bdd": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "AudioModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "AudioModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "AudioView",
            "autoplay": false,
            "controls": true,
            "format": "mpeg",
            "layout": "IPY_MODEL_cd985a07550f4f229647c75bd72bbc52",
            "loop": false
          }
        },
        "cd985a07550f4f229647c75bd72bbc52": {
          "model_module": "@jupyter-widgets/base",
          "model_name": "LayoutModel",
          "model_module_version": "1.2.0",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "160affe66cc7464db0bcef6615c52b87": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "AudioModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "AudioModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "AudioView",
            "autoplay": false,
            "controls": true,
            "format": "mpeg",
            "layout": "IPY_MODEL_1ae2d59b4ca8425db114df87e8b681eb",
            "loop": false
          }
        },
        "1ae2d59b4ca8425db114df87e8b681eb": {
          "model_module": "@jupyter-widgets/base",
          "model_name": "LayoutModel",
          "model_module_version": "1.2.0",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        }
      }
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# Learn OpenAI Whisper - Chapter 1\n",
        "## Using Whisper in Google Colab\n",
        "This notebook provides a simple template for using OpenAI's Whisper for audio transcription in Google Colab.\n",
        "\n",
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1uka0UhZJBWIwLcubsFbiOw8fNGlBBI-a)\n",
        "## Install Whisper\n",
        "Run the cell below to install Whisper.\n",
        "\n",
        "The Python libraries `openai`, `cohere`, and `tiktoken` are also installed because of dependencies for the `llmx` library. That is because `llmx` relies on them to function correctly. Each of these libraries provides specific functionalities that `llmx` uses.\n",
        "\n",
        "1. `openai`: This is the official Python library for the OpenAI API. It provides convenient access to the OpenAI REST API from any Python 3.7+ application. The library includes type definitions for all request parameters and response fields, and offers both synchronous and asynchronous clients powered by `httpx`.\n",
        "\n",
        "2. `cohere`: The Cohere platform builds natural language processing and generation into your product with a few lines of code. It can solve a broad spectrum of natural language use cases, including classification, semantic search, paraphrasing, summarization, and content generation.\n",
        "\n",
        "3. `tiktoken`: This is a fast Byte Pair Encoding (BPE) tokenizer for use with OpenAI's models. It's used to tokenize text into subwords, a necessary step before feeding text into many modern language models."
      ],
      "metadata": {
        "id": "DQlDUDCte1d4"
      }
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "WcZajWk8eG6o"
      },
      "outputs": [],
      "source": [
        "%%capture\n",
        "!pip install -q cohere openai tiktoken\n",
        "!pip install -q git+https://github.com/openai/whisper.git"
      ]
    },
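    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional check (not part of the original chapter): verify the installs by\n",
        "# round-tripping a string through tiktoken's BPE tokenizer, which the text\n",
        "# above describes. Decoding the encoded tokens should return the input text.\n",
        "import tiktoken\n",
        "enc = tiktoken.get_encoding(\"cl100k_base\")\n",
        "tokens = enc.encode(\"Hello, Whisper!\")\n",
        "print(tokens)\n",
        "print(enc.decode(tokens))"
      ]
    },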
    {
      "cell_type": "markdown",
      "source": [
        "##Option 1: Upload audio file\n",
        "Use the file upload feature of Google Colab to upload your audio file.\n",
        "\n",
        "Also, a recording of the author's voice can be found at Packt's GitHub repository:\n",
        "\n",
        "https://github.com/PacktPublishing/Learn-OpenAI-Whisper/blob/main/Chapter01/Learn_OAI_Whisper_Sample_Audio01.m4a"
      ],
      "metadata": {
        "id": "KVCG4VGCVg8P"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import ipywidgets as widgets\n",
        "uploader = widgets.FileUpload(accept='.mp3,.wav,.m4a', multiple=False)\n",
        "display(uploader)\n",
        "\n",
        "# Once this block runs, click the upload button below to upload your downloaded .m4a file"
      ],
      "metadata": {
        "id": "FnMnZ0T0oAKS",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 49,
          "referenced_widgets": [
            "d2b294ef10cc48fd9613655d23d6d906",
            "47acbd0d6f1b4ed7b89a8e998846bf86",
            "d49e081c4f3f44d4b82f01497a4cbce8"
          ]
        },
        "outputId": "4905400b-f58d-4a0f-c30e-6ca931dc28c5"
      },
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "FileUpload(value={}, accept='.mp3,.wav,.m4a', description='Upload')"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "d2b294ef10cc48fd9613655d23d6d906"
            }
          },
          "metadata": {}
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Convert the dict_items to a list and get the first item (your file and its info)\n",
        "file_key, file_info = list(uploader.value.items())[0]\n",
        "file_name = file_info['metadata']['name']\n",
        "file_content = file_info['content']\n",
        "with open(file_name, \"wb\") as fp:\n",
        "    fp.write(file_content)"
      ],
      "metadata": {
        "id": "Ib6Jyf1VvTFO"
      },
      "execution_count": 3,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import ipywidgets as widgets\n",
        "widgets.Audio.from_file(file_name, autoplay=False, loop=False)"
      ],
      "metadata": {
        "id": "-aYnSX1blhsA",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 79,
          "referenced_widgets": [
            "89292ee33f334559aee1e464d6da8e60",
            "bea51ddd60ce4848a978bfafce43f903"
          ]
        },
        "outputId": "508db7e8-f149-4371-9b6c-ae2fee1dbe35"
      },
      "execution_count": 4,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Audio(value=b'ID3\\x03\\x00\\x00\\x00\\x00\\x1fvPRIV\\x00\\x00\\x00\\x0e\\x00\\x00PeakValue\\x00\\xa1\\x7f\\x00\\x00PRIV\\x00\\x0…"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "89292ee33f334559aee1e464d6da8e60"
            }
          },
          "metadata": {}
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# One option to run Whisper is using command-line parameters\n",
        "# This command transcribes the uploaded file using Whisper small size model\n",
        "!whisper {file_name} --model small"
      ],
      "metadata": {
        "id": "wf8PQ3qSX_X4",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "228565e2-6906-456e-a226-fe2ecf601ff6"
      },
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "100%|████████████████████████████████████████| 461M/461M [00:03<00:00, 133MiB/s]\n",
            "Detecting language using up to the first 30 seconds. Use `--language` to specify the language\n",
            "Detected language: Spanish\n",
            "[00:00.000 --> 00:02.000]  ¿Cuál es la fecha de tu cumpleaños?\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Option 2: Download sample files"
      ],
      "metadata": {
        "id": "F9Aa7J7wGXJC"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!wget -nv https://github.com/PacktPublishing/Learn-OpenAI-Whisper/raw/main/Chapter01/Learn_OAI_Whisper_Sample_Audio01.mp3\n",
        "!wget -nv https://github.com/PacktPublishing/Learn-OpenAI-Whisper/raw/main/Chapter01/Learn_OAI_Whisper_Sample_Audio02.mp3"
      ],
      "metadata": {
        "id": "Z_O4SOyQAJrT",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "5aa8b51c-a95b-473c-e1bf-dff85a408490"
      },
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "2024-04-05 11:37:44 URL:https://raw.githubusercontent.com/PacktPublishing/Learn-OpenAI-Whisper/main/Chapter01/Learn_OAI_Whisper_Sample_Audio01.mp3 [363247/363247] -> \"Learn_OAI_Whisper_Sample_Audio01.mp3\" [1]\n",
            "2024-04-05 11:37:44 URL:https://raw.githubusercontent.com/PacktPublishing/Learn-OpenAI-Whisper/main/Chapter01/Learn_OAI_Whisper_Sample_Audio02.mp3 [458561/458561] -> \"Learn_OAI_Whisper_Sample_Audio02.mp3\" [1]\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "3de4cf7b"
      },
      "outputs": [],
      "source": [
        "mono_file = \"Learn_OAI_Whisper_Sample_Audio01.mp3\"\n",
        "stereo_file = \"Learn_OAI_Whisper_Sample_Audio02.mp3\""
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import ipywidgets as widgets\n",
        "widgets.Audio.from_file(mono_file, autoplay=False, loop=False)"
      ],
      "metadata": {
        "id": "dHi6eXCLBAYP",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 79,
          "referenced_widgets": [
            "4b4b410700704027b278c287af5c0a93",
            "1a42233e0a8040b29ab492ec0f1c7ba4"
          ]
        },
        "outputId": "6c8a63e7-b943-4bea-e558-d29dc7994447"
      },
      "execution_count": 8,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Audio(value=b'\\xff\\xfb\\x90\\xc4\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00Xing\\x00\\x00…"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "4b4b410700704027b278c287af5c0a93"
            }
          },
          "metadata": {}
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import ipywidgets as widgets\n",
        "widgets.Audio.from_file(stereo_file, autoplay=False, loop=False)"
      ],
      "metadata": {
        "id": "0uX3-0_RBEUA",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 79,
          "referenced_widgets": [
            "0ec96414142645c4a6af20f7cf8f6bdd",
            "cd985a07550f4f229647c75bd72bbc52"
          ]
        },
        "outputId": "3d9d1fcc-250a-42db-abb1-aed3efe2cfbf"
      },
      "execution_count": 9,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Audio(value=b'\\xff\\xfb\\x90d\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x0…"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "0ec96414142645c4a6af20f7cf8f6bdd"
            }
          },
          "metadata": {}
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Another way to run Whisper is by instatntiating a model object\n",
        "import whisper\n",
        "\n",
        "# Load the small English language model\n",
        "model = whisper.load_model(\"small.en\")"
      ],
      "metadata": {
        "id": "FyTSMCMnImDn",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "d4ad998f-a67f-44e1-8fb5-83cfbdf928d1"
      },
      "execution_count": 10,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "100%|███████████████████████████████████████| 461M/461M [00:40<00:00, 11.8MiB/s]\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# NLTK helps to split the transcription sentence by sentence\n",
        "# and shows it in a neat manner one below another. You will see it in the output below.\n",
        "\n",
        "import nltk\n",
        "nltk.download('punkt')\n",
        "from nltk import sent_tokenize"
      ],
      "metadata": {
        "id": "dXMVkjEKBv-K",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "ce7b939b-5c72-46fa-e98d-3212fa37b1b1"
      },
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "[nltk_data] Downloading package punkt to /root/nltk_data...\n",
            "[nltk_data]   Unzipping tokenizers/punkt.zip.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Transcribe the mono audio file\n",
        "result = model.transcribe(mono_file)\n",
        "print(\"Transcription of mono_file:\")\n",
        "for sent in sent_tokenize(result['text']):\n",
        "  print(sent)"
      ],
      "metadata": {
        "id": "XQN1vzABIKFl",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "1f433f43-0e2d-492b-e9c6-83a9de223edd"
      },
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Transcription of mono_file:\n",
            " Hello, this is Josue Batista.\n",
            "I am the author of the book Learn Open AI Whisper, Transform Your Understanding of Generative AI Through Robust and Accurate Speech Processing Solutions.\n",
            "This is an audio sample that you can use to try and test and enhance your own implementation of whisper.\n",
            "Good luck!\n"
          ]
        }
      ]
    },
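    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional sketch (not in the original chapter): besides 'text', the result\n",
        "# dictionary returned by model.transcribe() also contains 'segments', each\n",
        "# with start and end timestamps. This prints them in an SRT-like form.\n",
        "for seg in result['segments']:\n",
        "    print(f\"[{seg['start']:.2f}s -> {seg['end']:.2f}s] {seg['text'].strip()}\")"
      ]
    },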
    {
      "cell_type": "code",
      "source": [
        "# Transcribe the stereo audio file\n",
        "result = model.transcribe(stereo_file)\n",
        "print(\"Transcription of stereo_file:\")\n",
        "for sent in sent_tokenize(result['text']):\n",
        "  print(sent)"
      ],
      "metadata": {
        "id": "Q3NTddFBFo88",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "ec79b900-b7ec-4239-f911-88a8c6d9568a"
      },
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Transcription of stereo_file:\n",
            " Offstage left.\n",
            "Far left.\n",
            "My voice should be coming directly out of the left speaker.\n",
            "Midway between center and left position.\n",
            "Exact center position.\n",
            "Midway between center and right position.\n",
            "And at the right hand position.\n",
            "Now I'm offstage right.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **The following blocks are examples from Chapter 1 that showcase other functionalities of Whisper**"
      ],
      "metadata": {
        "id": "lqbO7womWF46"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!wget -nv -O Learn_OAI_Whisper_Spanish_Sample_Audio01.mp3 https://github.com/PacktPublishing/Learn-OpenAI-Whisper/raw/main/Chapter01/Learn_OAI_Whisper_Spanish_Sample_Audio01.mp3"
      ],
      "metadata": {
        "id": "kzNSsR0pTvfN",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "f6eb4668-864b-45ae-d4d9-f7b3094e2b12"
      },
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "2024-04-05 11:40:20 URL:https://raw.githubusercontent.com/PacktPublishing/Learn-OpenAI-Whisper/main/Chapter01/Learn_OAI_Whisper_Spanish_Sample_Audio01.mp3 [24361/24361] -> \"Learn_OAI_Whisper_Spanish_Sample_Audio01.mp3\" [1]\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import ipywidgets as widgets\n",
        "spanish_file = \"Learn_OAI_Whisper_Spanish_Sample_Audio01.mp3\"\n",
        "widgets.Audio.from_file(spanish_file, autoplay=False, loop=False)"
      ],
      "metadata": {
        "id": "9IU_o4ZaUJWG",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 79,
          "referenced_widgets": [
            "160affe66cc7464db0bcef6615c52b87",
            "1ae2d59b4ca8425db114df87e8b681eb"
          ]
        },
        "outputId": "5adbc473-cb70-4448-e760-9cfb439add16"
      },
      "execution_count": 15,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Audio(value=b'ID3\\x03\\x00\\x00\\x00\\x00\\x1fvPRIV\\x00\\x00\\x00\\x0e\\x00\\x00PeakValue\\x00\\xa1\\x7f\\x00\\x00PRIV\\x00\\x0…"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "160affe66cc7464db0bcef6615c52b87"
            }
          },
          "metadata": {}
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "Specifying language: You can specify the language for more accurate transcription.\n",
        "'''\n",
        "\n",
        "!whisper {spanish_file} --model small --language Spanish"
      ],
      "metadata": {
        "id": "a7Tk5bd0TH8F",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "164f5e05-fdc6-4169-8035-1d202e01aa01"
      },
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[00:00.000 --> 00:02.000]  ¿Cuál es la fecha de tu cumpleaños?\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "Sending output to a specific folder: Instead of saving the transcription output in the same directory\n",
        "location as the file being processed, you can direct the output to a specific directory using the --output_dir flag.\n",
        "'''\n",
        "!whisper {mono_file} --model small.en --output_dir \"/content/WhisperDemoOutputs/\"\n",
        "# Once this block runs, click the refresh folder button on the left to view output folder"
      ],
      "metadata": {
        "id": "POqbhgQXTnXI"
      },
      "execution_count": null,
      "outputs": []
    },
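    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional check (not in the original chapter): list the files Whisper wrote\n",
        "# to the output directory (.txt, .srt, .vtt, .tsv, and .json by default).\n",
        "!ls -l /content/WhisperDemoOutputs/"
      ]
    },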
    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "Specifying tasks: Whisper can handle different tasks, such as transcription and translation.\n",
        "Specify the task with the --task flag; use --task translate to translate foreign-language audio into\n",
        "an English transcription. Whisper does not translate into any target language other than English.\n",
        "If you have a non-English audio file, upload it above and run this block of code.\n",
        "'''\n",
        "\n",
        "!whisper {spanish_file} --model small --task translate --output_dir \"/content/WhisperDemoTranslate/\""
      ],
      "metadata": {
        "id": "tYz3a2RJUDj2",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "3c6edc64-904c-4c25-e6dc-32bc8cd21ce9"
      },
      "execution_count": 17,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Detecting language using up to the first 30 seconds. Use `--language` to specify the language\n",
            "Detected language: Spanish\n",
            "[00:00.000 --> 00:02.000]  What is the date of your birthday?\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "clip_timestamps: This flag takes a comma-separated list of start,end,start,end,... timestamps (in seconds)\n",
        "for the clips to process from the audio file. For example, use --clip_timestamps 0,5 to process only\n",
        "the first 5 seconds of the audio clip.\n",
        "'''\n",
        "!whisper {mono_file} --model small.en --clip_timestamps 0,5"
      ],
      "metadata": {
        "id": "NDXfWp15WgpH",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "6370770a-1b12-4096-8946-6ec6862591e9"
      },
      "execution_count": 18,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[00:00.000 --> 00:05.000]  Hello, this is Josue Batista.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "Controlling the number of transcription candidates: Whisper's --best_of parameter controls how many\n",
        "candidate transcriptions are sampled during decoding when the temperature is non-zero; the best-scoring\n",
        "candidate becomes the output. Larger values can improve quality at the cost of extra computation.\n",
        "'''\n",
        "!whisper {mono_file} --model small.en --best_of 3"
      ],
      "metadata": {
        "id": "anJiXin5W71u",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "b9b77a5d-41d8-420f-a872-d14e01baf100"
      },
      "execution_count": 19,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[00:00.000 --> 00:06.000]  Hello, this is Josue Batista.\n",
            "[00:06.000 --> 00:13.960]  I am the author of the book Learn Open AI Whisper, transform your understanding of generative\n",
            "[00:13.960 --> 00:20.880]  AI through robust and accurate speech processing solutions.\n",
            "[00:20.880 --> 00:31.880]  This is an audio sample that you can use to try and test and enhance your own implementation\n",
            "[00:31.880 --> 00:32.880]  of whisper.\n",
            "[00:32.880 --> 00:33.880]  Good luck!\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "Adjusting temperature: The --temperature parameter controls the randomness of decoding.\n",
        "Lower values produce more deterministic results; a value of 0 selects greedy or beam search decoding.\n",
        "'''\n",
        "!whisper {mono_file} --model small.en --temperature 0"
      ],
      "metadata": {
        "id": "1keYUQgVXn4V",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "09db4851-f732-453d-e846-d98501ec7a23"
      },
      "execution_count": 20,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[00:00.000 --> 00:06.000]  Hello, this is Josue Batista.\n",
            "[00:06.000 --> 00:13.960]  I am the author of the book Learn Open AI Whisper, transform your understanding of generative\n",
            "[00:13.960 --> 00:20.880]  AI through robust and accurate speech processing solutions.\n",
            "[00:20.880 --> 00:31.880]  This is an audio sample that you can use to try and test and enhance your own implementation\n",
            "[00:31.880 --> 00:32.880]  of whisper.\n",
            "[00:32.880 --> 00:33.880]  Good luck!\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "Adjusting the beam size for decoding: Whisper's --beam_size flag controls the beam search width\n",
        "during decoding (it applies when the temperature is 0). A larger beam size can improve accuracy\n",
        "but slows down processing.\n",
        "'''\n",
        "!whisper {mono_file} --model small.en --temperature 0 --beam_size 2"
      ],
      "metadata": {
        "id": "_CPNCbI_YOY8",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "7b5ef96a-2bde-44d7-a611-e8eb72303966"
      },
      "execution_count": 21,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[00:00.000 --> 00:06.000]  Hello, this is Josue Batista.\n",
            "[00:06.000 --> 00:13.360]  I am the author of the book Learn Open AI Whisper – Transform Your Understanding of\n",
            "[00:13.360 --> 00:20.880]  Generative AI Through Robust and Accurate Speech Processing Solutions.\n",
            "[00:20.880 --> 00:31.920]  This is an audio sample that you can use to try and test and enhance your own implementation\n",
            "[00:31.920 --> 00:32.920]  of whisper.\n",
            "[00:32.920 --> 00:33.920]  Good luck!\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# A word or two about --beam_size and --temperature\n",
        "\n",
        "The `--beam_size` parameter in OpenAI's Whisper model refers to the number of beams used in [beam search](https://www.width.ai/post/what-is-beam-search) during the decoding process. Beam search is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. In the context of Whisper, which is an automatic speech recognition (ASR) model, beam search is used to find the most likely sequence of words given the audio input.\n",
        "\n",
        "The `--temperature` parameter is used to control the randomness of the output during sampling. A higher temperature results in more random outputs, while a lower temperature makes the model's outputs more deterministic. When the temperature is set to zero, the model uses a greedy decoding strategy, always choosing the most likely next word.\n",
        "\n",
        "The relationship between `--beam_size` and `--temperature` is that they both influence the decoding strategy and the diversity of the generated text. A larger `--beam_size` can potentially increase the accuracy of the transcription by considering more alternative word sequences, but it also requires more computational resources and can [slow down the inference process](https://github.com/openai/whisper/discussions/396). On the other hand, `--temperature` affects the variability of the output; a non-zero temperature allows for sampling from a distribution of possible next words, which can introduce variability and potentially capture more nuances in the speech.\n",
        "\n",
        "In practice, the `--beam_size` parameter is used when the [temperature is set to zero](https://huggingface.co/spaces/aadnk/whisper-webui/blob/main/docs/options.md), indicating that beam search should be used. If the temperature is non-zero, the `--best_of` parameter is used instead to determine the number of candidates to sample from. The Whisper model uses a dynamic temperature setting, starting with a temperature of 0 and increasing it by 0.2 up to 1.0 when certain conditions are met, such as when the average log probability over the generated tokens is lower than a threshold or when the generated text has a [gzip compression](https://community.openai.com/t/whisper-hallucination-how-to-recognize-and-solve/218307/16) rate higher than a certain value.\n",
        "\n",
        "In summary, `--beam_size` controls the breadth of the search in beam search decoding, and `--temperature` controls the randomness of the output during sampling. They are part of the decoding strategy that affects the final transcription or translation produced by the Whisper model."
      ],
      "metadata": {
        "id": "Bj5uKAfpDNvc"
      }
    },
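    {
      "cell_type": "code",
      "source": [
        "'''\n",
        "Illustrative sketch (added for clarity; assumes the whisper Python package installed earlier\n",
        "and the mono_file variable defined above): the same decoding controls described in the\n",
        "explanation above are available through the Python API, where transcribe() forwards\n",
        "temperature, beam_size, and best_of to the decoder, mirroring the CLI flags.\n",
        "'''\n",
        "import whisper\n",
        "\n",
        "model = whisper.load_model(\"small.en\")\n",
        "# temperature=0 makes decoding deterministic, so beam_size (not best_of) takes effect\n",
        "result = model.transcribe(mono_file, temperature=0, beam_size=2)\n",
        "print(result[\"text\"])"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },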
    {
      "cell_type": "markdown",
      "source": [
        "# Gratitude\n",
        "\n",
        "Many thanks to Naval Katoch for his valuable insights."
      ],
      "metadata": {
        "id": "1_qasqaAKHPR"
      }
    }
  ]
}