{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "h5vA9TQY1A8x"
   },
   "source": [
    "# Multi Speaker & Context Aware AI Podcast Generation\n",
    "\n",
    "Hi everyone, welcome to this notebook!  \n",
    "\n",
    "Today, we'll be building an AI-powered podcast generator that not only converts my Medium blog into a podcast but also incorporates insights from my previous work on related topics. This added context makes the podcast feel more natural and builds trust in AI-generated content.  \n",
    "\n",
    "### **Tech Stack Used:**  \n",
    "1. **LangChain**  \n",
    "2. **ElevenLabs** (for TTS)  \n",
    "3. **Gemini** (LLM)  \n",
    "4. **LanceDB** (for context retrieval)\n",
    "\n",
    "### How to use this notebook\n",
    "To generate a podcast with my configuration, simply replace the sample blog text used here with your own blog. To change the voices, the models, or the number of speakers, update the configuration at the relevant steps as explained in the blog.\n",
    "\n",
    "Let's go.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tlM7rnDvZ7BG"
   },
   "source": [
    "Listen to this podcast I created using the below code - [PODCAST](https://github.com/shuklaji28/vectordb-recipes/blob/main/examples/Multi_Speaker_Context_Aware_Podcast_Generation/Podcasts/podcast.mp3)\n",
    "\n",
    "References I used while building this - [1.](https://www.youtube.com/watch?v=FT-OyDzjAZo&t=503s)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uSziI7WU05Yt"
   },
   "source": [
    "#### Install Necessary Libraries and Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "collapsed": true,
    "id": "q_uxc65C0Hth",
    "outputId": "199772f4-ce2a-45b3-ecd0-61b1ad319116"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting feedparser\n",
      "  Downloading feedparser-6.0.11-py3-none-any.whl.metadata (2.4 kB)\n",
      "Collecting sgmllib3k (from feedparser)\n",
      "  Downloading sgmllib3k-1.0.0.tar.gz (5.8 kB)\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "Downloading feedparser-6.0.11-py3-none-any.whl (81 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m81.3/81.3 kB\u001b[0m \u001b[31m2.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hBuilding wheels for collected packages: sgmllib3k\n",
      "  Building wheel for sgmllib3k (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Created wheel for sgmllib3k: filename=sgmllib3k-1.0.0-py3-none-any.whl size=6047 sha256=b25b87ad57a90567f78a495af6e139a97b0917a85c1b21ca8dc6b3cbaa358576\n",
      "  Stored in directory: /root/.cache/pip/wheels/3b/25/2a/105d6a15df6914f4d15047691c6c28f9052cc1173e40285d03\n",
      "Successfully built sgmllib3k\n",
      "Installing collected packages: sgmllib3k, feedparser\n",
      "Successfully installed feedparser-6.0.11 sgmllib3k-1.0.0\n",
      "Collecting tantivy\n",
      "  Downloading tantivy-0.22.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.2 kB)\n",
      "Downloading tantivy-0.22.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.5 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m4.5/4.5 MB\u001b[0m \u001b[31m19.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hInstalling collected packages: tantivy\n",
      "Successfully installed tantivy-0.22.0\n",
      "Collecting lancedb\n",
      "  Downloading lancedb-0.21.1-cp39-abi3-manylinux_2_28_x86_64.whl.metadata (4.1 kB)\n",
      "Collecting deprecation (from lancedb)\n",
      "  Downloading deprecation-2.1.0-py2.py3-none-any.whl.metadata (4.6 kB)\n",
      "Requirement already satisfied: tqdm>=4.27.0 in /usr/local/lib/python3.11/dist-packages (from lancedb) (4.67.1)\n",
      "Requirement already satisfied: pyarrow>=14 in /usr/local/lib/python3.11/dist-packages (from lancedb) (18.1.0)\n",
      "Requirement already satisfied: pydantic>=1.10 in /usr/local/lib/python3.11/dist-packages (from lancedb) (2.10.6)\n",
      "Requirement already satisfied: packaging in /usr/local/lib/python3.11/dist-packages (from lancedb) (24.2)\n",
      "Collecting overrides>=0.7 (from lancedb)\n",
      "  Downloading overrides-7.7.0-py3-none-any.whl.metadata (5.8 kB)\n",
      "Collecting pylance>=0.23.2 (from lancedb)\n",
      "  Downloading pylance-0.24.1-cp39-abi3-manylinux_2_28_x86_64.whl.metadata (7.2 kB)\n",
      "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from pydantic>=1.10->lancedb) (0.7.0)\n",
      "Requirement already satisfied: pydantic-core==2.27.2 in /usr/local/lib/python3.11/dist-packages (from pydantic>=1.10->lancedb) (2.27.2)\n",
      "Requirement already satisfied: typing-extensions>=4.12.2 in /usr/local/lib/python3.11/dist-packages (from pydantic>=1.10->lancedb) (4.12.2)\n",
      "Requirement already satisfied: numpy>=1.22 in /usr/local/lib/python3.11/dist-packages (from pylance>=0.23.2->lancedb) (2.0.2)\n",
      "Downloading lancedb-0.21.1-cp39-abi3-manylinux_2_28_x86_64.whl (33.2 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m33.2/33.2 MB\u001b[0m \u001b[31m14.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading overrides-7.7.0-py3-none-any.whl (17 kB)\n",
      "Downloading pylance-0.24.1-cp39-abi3-manylinux_2_28_x86_64.whl (36.8 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m36.8/36.8 MB\u001b[0m \u001b[31m10.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading deprecation-2.1.0-py2.py3-none-any.whl (11 kB)\n",
      "Installing collected packages: pylance, overrides, deprecation, lancedb\n",
      "Successfully installed deprecation-2.1.0 lancedb-0.21.1 overrides-7.7.0 pylance-0.24.1\n",
      "Requirement already satisfied: sentence-transformers in /usr/local/lib/python3.11/dist-packages (3.4.1)\n",
      "Requirement already satisfied: transformers<5.0.0,>=4.41.0 in /usr/local/lib/python3.11/dist-packages (from sentence-transformers) (4.48.3)\n",
      "Requirement already satisfied: tqdm in /usr/local/lib/python3.11/dist-packages (from sentence-transformers) (4.67.1)\n",
      "Requirement already satisfied: torch>=1.11.0 in /usr/local/lib/python3.11/dist-packages (from sentence-transformers) (2.6.0+cu124)\n",
      "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.11/dist-packages (from sentence-transformers) (1.6.1)\n",
      "Requirement already satisfied: scipy in /usr/local/lib/python3.11/dist-packages (from sentence-transformers) (1.14.1)\n",
      "Requirement already satisfied: huggingface-hub>=0.20.0 in /usr/local/lib/python3.11/dist-packages (from sentence-transformers) (0.28.1)\n",
      "Requirement already satisfied: Pillow in /usr/local/lib/python3.11/dist-packages (from sentence-transformers) (11.1.0)\n",
      "Requirement already satisfied: filelock in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers) (3.17.0)\n",
      "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers) (2024.10.0)\n",
      "Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers) (24.2)\n",
      "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers) (6.0.2)\n",
      "Requirement already satisfied: requests in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers) (2.32.3)\n",
      "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers) (4.12.2)\n",
      "Requirement already satisfied: networkx in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers) (3.4.2)\n",
      "Requirement already satisfied: jinja2 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers) (3.1.6)\n",
      "Collecting nvidia-cuda-nvrtc-cu12==12.4.127 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-cuda-runtime-cu12==12.4.127 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-cuda-cupti-cu12==12.4.127 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "Collecting nvidia-cudnn-cu12==9.1.0.70 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "Collecting nvidia-cublas-cu12==12.4.5.8 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-cufft-cu12==11.2.1.3 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-curand-cu12==10.3.5.147 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Collecting nvidia-cusolver-cu12==11.6.1.9 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "Collecting nvidia-cusparse-cu12==12.3.1.170 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)\n",
      "Requirement already satisfied: nvidia-cusparselt-cu12==0.6.2 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers) (0.6.2)\n",
      "Requirement already satisfied: nvidia-nccl-cu12==2.21.5 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers) (2.21.5)\n",
      "Requirement already satisfied: nvidia-nvtx-cu12==12.4.127 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers) (12.4.127)\n",
      "Collecting nvidia-nvjitlink-cu12==12.4.127 (from torch>=1.11.0->sentence-transformers)\n",
      "  Downloading nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
      "Requirement already satisfied: triton==3.2.0 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers) (3.2.0)\n",
      "Requirement already satisfied: sympy==1.13.1 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers) (1.13.1)\n",
      "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.11/dist-packages (from sympy==1.13.1->torch>=1.11.0->sentence-transformers) (1.3.0)\n",
      "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.11/dist-packages (from transformers<5.0.0,>=4.41.0->sentence-transformers) (2.0.2)\n",
      "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.11/dist-packages (from transformers<5.0.0,>=4.41.0->sentence-transformers) (2024.11.6)\n",
      "Requirement already satisfied: tokenizers<0.22,>=0.21 in /usr/local/lib/python3.11/dist-packages (from transformers<5.0.0,>=4.41.0->sentence-transformers) (0.21.1)\n",
      "Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.11/dist-packages (from transformers<5.0.0,>=4.41.0->sentence-transformers) (0.5.3)\n",
      "Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn->sentence-transformers) (1.4.2)\n",
      "Requirement already satisfied: threadpoolctl>=3.1.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn->sentence-transformers) (3.6.0)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.11/dist-packages (from jinja2->torch>=1.11.0->sentence-transformers) (3.0.2)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.20.0->sentence-transformers) (3.4.1)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.20.0->sentence-transformers) (3.10)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.20.0->sentence-transformers) (2.3.0)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.20.0->sentence-transformers) (2025.1.31)\n",
      "Downloading nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl (363.4 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m363.4/363.4 MB\u001b[0m \u001b[31m1.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (13.8 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m13.8/13.8 MB\u001b[0m \u001b[31m6.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (24.6 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m24.6/24.6 MB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (883 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m883.7/883.7 kB\u001b[0m \u001b[31m9.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl (664.8 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m664.8/664.8 MB\u001b[0m \u001b[31m723.9 kB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl (211.5 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m211.5/211.5 MB\u001b[0m \u001b[31m2.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl (56.3 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m56.3/56.3 MB\u001b[0m \u001b[31m5.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl (127.9 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m127.9/127.9 MB\u001b[0m \u001b[31m4.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl (207.5 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m207.5/207.5 MB\u001b[0m \u001b[31m2.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (21.1 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m21.1/21.1 MB\u001b[0m \u001b[31m11.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hInstalling collected packages: nvidia-nvjitlink-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, nvidia-cusparse-cu12, nvidia-cudnn-cu12, nvidia-cusolver-cu12\n",
      "  Attempting uninstall: nvidia-nvjitlink-cu12\n",
      "    Found existing installation: nvidia-nvjitlink-cu12 12.5.82\n",
      "    Uninstalling nvidia-nvjitlink-cu12-12.5.82:\n",
      "      Successfully uninstalled nvidia-nvjitlink-cu12-12.5.82\n",
      "  Attempting uninstall: nvidia-curand-cu12\n",
      "    Found existing installation: nvidia-curand-cu12 10.3.6.82\n",
      "    Uninstalling nvidia-curand-cu12-10.3.6.82:\n",
      "      Successfully uninstalled nvidia-curand-cu12-10.3.6.82\n",
      "  Attempting uninstall: nvidia-cufft-cu12\n",
      "    Found existing installation: nvidia-cufft-cu12 11.2.3.61\n",
      "    Uninstalling nvidia-cufft-cu12-11.2.3.61:\n",
      "      Successfully uninstalled nvidia-cufft-cu12-11.2.3.61\n",
      "  Attempting uninstall: nvidia-cuda-runtime-cu12\n",
      "    Found existing installation: nvidia-cuda-runtime-cu12 12.5.82\n",
      "    Uninstalling nvidia-cuda-runtime-cu12-12.5.82:\n",
      "      Successfully uninstalled nvidia-cuda-runtime-cu12-12.5.82\n",
      "  Attempting uninstall: nvidia-cuda-nvrtc-cu12\n",
      "    Found existing installation: nvidia-cuda-nvrtc-cu12 12.5.82\n",
      "    Uninstalling nvidia-cuda-nvrtc-cu12-12.5.82:\n",
      "      Successfully uninstalled nvidia-cuda-nvrtc-cu12-12.5.82\n",
      "  Attempting uninstall: nvidia-cuda-cupti-cu12\n",
      "    Found existing installation: nvidia-cuda-cupti-cu12 12.5.82\n",
      "    Uninstalling nvidia-cuda-cupti-cu12-12.5.82:\n",
      "      Successfully uninstalled nvidia-cuda-cupti-cu12-12.5.82\n",
      "  Attempting uninstall: nvidia-cublas-cu12\n",
      "    Found existing installation: nvidia-cublas-cu12 12.5.3.2\n",
      "    Uninstalling nvidia-cublas-cu12-12.5.3.2:\n",
      "      Successfully uninstalled nvidia-cublas-cu12-12.5.3.2\n",
      "  Attempting uninstall: nvidia-cusparse-cu12\n",
      "    Found existing installation: nvidia-cusparse-cu12 12.5.1.3\n",
      "    Uninstalling nvidia-cusparse-cu12-12.5.1.3:\n",
      "      Successfully uninstalled nvidia-cusparse-cu12-12.5.1.3\n",
      "  Attempting uninstall: nvidia-cudnn-cu12\n",
      "    Found existing installation: nvidia-cudnn-cu12 9.3.0.75\n",
      "    Uninstalling nvidia-cudnn-cu12-9.3.0.75:\n",
      "      Successfully uninstalled nvidia-cudnn-cu12-9.3.0.75\n",
      "  Attempting uninstall: nvidia-cusolver-cu12\n",
      "    Found existing installation: nvidia-cusolver-cu12 11.6.3.83\n",
      "    Uninstalling nvidia-cusolver-cu12-11.6.3.83:\n",
      "      Successfully uninstalled nvidia-cusolver-cu12-11.6.3.83\n",
      "Successfully installed nvidia-cublas-cu12-12.4.5.8 nvidia-cuda-cupti-cu12-12.4.127 nvidia-cuda-nvrtc-cu12-12.4.127 nvidia-cuda-runtime-cu12-12.4.127 nvidia-cudnn-cu12-9.1.0.70 nvidia-cufft-cu12-11.2.1.3 nvidia-curand-cu12-10.3.5.147 nvidia-cusolver-cu12-11.6.1.9 nvidia-cusparse-cu12-12.3.1.170 nvidia-nvjitlink-cu12-12.4.127\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.5/2.5 MB\u001b[0m \u001b[31m29.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m40.2/40.2 kB\u001b[0m \u001b[31m2.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.4/1.4 MB\u001b[0m \u001b[31m48.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.9/50.9 kB\u001b[0m \u001b[31m3.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
      "google-generativeai 0.8.4 requires google-ai-generativelanguage==0.6.15, but you have google-ai-generativelanguage 0.6.17 which is incompatible.\u001b[0m\u001b[31m\n",
      "\u001b[0mW: Skipping acquire of configured file 'main/source/Sources' as repository 'https://r2u.stat.illinois.edu/ubuntu jammy InRelease' does not seem to provide it (sources.list entry misspelt?)\n"
     ]
    }
   ],
   "source": [
    "!pip install feedparser  # to scrape content directly from the feed\n",
    "!pip install tantivy  # required for full-text search (FTS) with LanceDB\n",
    "!pip install lancedb\n",
    "!pip install sentence-transformers  # optional\n",
    "!pip install smallestai  # to generate AI voices\n",
    "\n",
    "# Install required packages for Podcast Generation\n",
    "!pip install -q langchain langchain_community langchain-google-genai pydub python-dotenv\n",
    "\n",
    "# Install ffmpeg for audio processing\n",
    "!apt-get update -qq && apt-get install -qq ffmpeg"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "collapsed": true,
    "id": "gomuOYbtPaJ5",
    "outputId": "450b6a25-6431-4d60-dad6-07bfbe1705cc"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting keybert\n",
      "  Downloading keybert-0.9.0-py3-none-any.whl.metadata (15 kB)\n",
      "Requirement already satisfied: numpy>=1.18.5 in /usr/local/lib/python3.11/dist-packages (from keybert) (2.0.2)\n",
      "Requirement already satisfied: rich>=10.4.0 in /usr/local/lib/python3.11/dist-packages (from keybert) (13.9.4)\n",
      "Requirement already satisfied: scikit-learn>=0.22.2 in /usr/local/lib/python3.11/dist-packages (from keybert) (1.6.1)\n",
      "Requirement already satisfied: sentence-transformers>=0.3.8 in /usr/local/lib/python3.11/dist-packages (from keybert) (3.4.1)\n",
      "Requirement already satisfied: markdown-it-py>=2.2.0 in /usr/local/lib/python3.11/dist-packages (from rich>=10.4.0->keybert) (3.0.0)\n",
      "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/local/lib/python3.11/dist-packages (from rich>=10.4.0->keybert) (2.18.0)\n",
      "Requirement already satisfied: scipy>=1.6.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn>=0.22.2->keybert) (1.14.1)\n",
      "Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn>=0.22.2->keybert) (1.4.2)\n",
      "Requirement already satisfied: threadpoolctl>=3.1.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn>=0.22.2->keybert) (3.6.0)\n",
      "Requirement already satisfied: transformers<5.0.0,>=4.41.0 in /usr/local/lib/python3.11/dist-packages (from sentence-transformers>=0.3.8->keybert) (4.48.3)\n",
      "Requirement already satisfied: tqdm in /usr/local/lib/python3.11/dist-packages (from sentence-transformers>=0.3.8->keybert) (4.67.1)\n",
      "Requirement already satisfied: torch>=1.11.0 in /usr/local/lib/python3.11/dist-packages (from sentence-transformers>=0.3.8->keybert) (2.6.0+cu124)\n",
      "Requirement already satisfied: huggingface-hub>=0.20.0 in /usr/local/lib/python3.11/dist-packages (from sentence-transformers>=0.3.8->keybert) (0.28.1)\n",
      "Requirement already satisfied: Pillow in /usr/local/lib/python3.11/dist-packages (from sentence-transformers>=0.3.8->keybert) (11.1.0)\n",
      "Requirement already satisfied: filelock in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (3.17.0)\n",
      "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (2024.10.0)\n",
      "Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (24.2)\n",
      "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (6.0.2)\n",
      "Requirement already satisfied: requests in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (2.32.3)\n",
      "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (4.12.2)\n",
      "Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.11/dist-packages (from markdown-it-py>=2.2.0->rich>=10.4.0->keybert) (0.1.2)\n",
      "Requirement already satisfied: networkx in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (3.4.2)\n",
      "Requirement already satisfied: jinja2 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (3.1.6)\n",
      "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.4.127 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (12.4.127)\n",
      "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.4.127 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (12.4.127)\n",
      "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.4.127 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (12.4.127)\n",
      "Requirement already satisfied: nvidia-cudnn-cu12==9.1.0.70 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (9.1.0.70)\n",
      "Requirement already satisfied: nvidia-cublas-cu12==12.4.5.8 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (12.4.5.8)\n",
      "Requirement already satisfied: nvidia-cufft-cu12==11.2.1.3 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (11.2.1.3)\n",
      "Requirement already satisfied: nvidia-curand-cu12==10.3.5.147 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (10.3.5.147)\n",
      "Requirement already satisfied: nvidia-cusolver-cu12==11.6.1.9 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (11.6.1.9)\n",
      "Requirement already satisfied: nvidia-cusparse-cu12==12.3.1.170 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (12.3.1.170)\n",
      "Requirement already satisfied: nvidia-cusparselt-cu12==0.6.2 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (0.6.2)\n",
      "Requirement already satisfied: nvidia-nccl-cu12==2.21.5 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (2.21.5)\n",
      "Requirement already satisfied: nvidia-nvtx-cu12==12.4.127 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (12.4.127)\n",
      "Requirement already satisfied: nvidia-nvjitlink-cu12==12.4.127 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (12.4.127)\n",
      "Requirement already satisfied: triton==3.2.0 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (3.2.0)\n",
      "Requirement already satisfied: sympy==1.13.1 in /usr/local/lib/python3.11/dist-packages (from torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (1.13.1)\n",
      "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.11/dist-packages (from sympy==1.13.1->torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (1.3.0)\n",
      "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.11/dist-packages (from transformers<5.0.0,>=4.41.0->sentence-transformers>=0.3.8->keybert) (2024.11.6)\n",
      "Requirement already satisfied: tokenizers<0.22,>=0.21 in /usr/local/lib/python3.11/dist-packages (from transformers<5.0.0,>=4.41.0->sentence-transformers>=0.3.8->keybert) (0.21.1)\n",
      "Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.11/dist-packages (from transformers<5.0.0,>=4.41.0->sentence-transformers>=0.3.8->keybert) (0.5.3)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.11/dist-packages (from jinja2->torch>=1.11.0->sentence-transformers>=0.3.8->keybert) (3.0.2)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (3.4.1)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (3.10)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (2.3.0)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.20.0->sentence-transformers>=0.3.8->keybert) (2025.1.31)\n",
      "Downloading keybert-0.9.0-py3-none-any.whl (41 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m41.4/41.4 kB\u001b[0m \u001b[31m1.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hInstalling collected packages: keybert\n",
      "Successfully installed keybert-0.9.0\n",
      "--2025-03-18 15:07:58--  https://raw.githubusercontent.com/shuklaji28/vectordb-recipes/main/examples/Multi_Speaker_Context_Aware_Podcast_Generation/urls.json\n",
      "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
      "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
      "HTTP request sent, awaiting response... 200 OK\n",
      "Length: 1127 (1.1K) [text/plain]\n",
      "Saving to: ‘urls.json’\n",
      "\n",
      "urls.json           100%[===================>]   1.10K  --.-KB/s    in 0s      \n",
      "\n",
      "2025-03-18 15:07:59 (42.6 MB/s) - ‘urls.json’ saved [1127/1127]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "!pip install keybert\n",
    "\n",
    "# replace the GitHub username in this URL with your own (or the lancedb account) if needed\n",
    "!wget https://raw.githubusercontent.com/shuklaji28/vectordb-recipes/main/examples/Multi_Speaker_Context_Aware_Podcast_Generation/urls.json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {
    "id": "8Y0BVFlzYtnP"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import json\n",
    "import shutil\n",
    "import re\n",
    "import requests\n",
    "import tempfile\n",
    "import base64\n",
    "\n",
    "from pydub import AudioSegment\n",
    "from langchain_google_genai import ChatGoogleGenerativeAI\n",
    "from langchain.prompts import ChatPromptTemplate\n",
    "from google.colab import files\n",
    "import pandas as pd\n",
    "from IPython.display import display, Audio, HTML\n",
    "from langchain_core.output_parsers import JsonOutputParser\n",
    "\n",
    "from bs4 import BeautifulSoup\n",
    "import lancedb\n",
    "from sentence_transformers import SentenceTransformer\n",
    "from lancedb.embeddings import get_registry\n",
    "import numpy as np\n",
    "from pprint import pprint\n",
    "import time\n",
    "from keybert import KeyBERT\n",
    "\n",
    "# Needed only if you use the Smallest AI TTS option (uncomment to enable):\n",
    "# from smallest import Smallest"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "_SkCYMtSY3xF"
   },
   "source": [
    "#### Helper Functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {
    "id": "bpRW_tR8Y6Zv"
   },
   "outputs": [],
   "source": [
    "def generate_conversation(article, podcast_name, speakers, additional_context):\n",
    "    # Create LangChain Gemini model\n",
    "    llm = ChatGoogleGenerativeAI(\n",
    "        model=\"gemini-1.5-pro\",\n",
    "        google_api_key=gemini_api_key,\n",
    "        temperature=0.9,\n",
    "    )\n",
    "\n",
    "    # Create system prompt\n",
    "    number_of_speakers = len(speakers)\n",
    "    speaker_names = \", \".join([s[\"name\"] for s in speakers])\n",
    "    host_name = speakers[0][\"name\"] if speakers else \"Host\"\n",
    "\n",
    "    context_text = additional_context\n",
    "\n",
    "    # # Format additional context from LanceDB for prompt\n",
    "    # context_text = \"\"\n",
    "    # if additional_context:\n",
    "    #     context_text = \"Here are some relevant contexts from previous blogs that should be referenced in the podcast:\\n\\n\"\n",
    "    #     for idx, ctx in enumerate(additional_context, 1):\n",
    "    #         context_text += f\"{idx}. Title: {ctx['title']}\\n   Excerpt: {ctx['excerpt']}\\n\\n\"\n",
    "\n",
    "    system_prompt = f\"\"\"You are an experienced podcast script writer for LanceDB.\n",
    "    Your task is to create an engaging conversation between different people based on the article provided.\n",
    "\n",
    "    The speakers are given as input.\n",
    "    - The first speaker in the list is the host of the podcast.\n",
    "    - Make the conversation between 30000-50000 characters for this demo (normally would be longer)\n",
    "    - Use short sentences that can be easily used with speech synthesis\n",
    "    - Include conversational fillers (um, uh, well, hmm) occasionally to make it sound natural\n",
    "    - Show excitement and engagement during the conversation\n",
    "    - Do not mention last names\n",
    "    - Avoid formal introductions like \"Thanks for having me on the show\"\n",
    "    - Make sure the script discusses the main article thoroughly\n",
    "\n",
    "    The response must be in JSON format with an array of objects, each containing:\n",
    "    - \"speaker\": the name of the speaker (must match exactly one of: {speaker_names})\n",
    "    - \"text\": what they say (a short paragraph or a few sentences at a time)\n",
    "    \"\"\"\n",
    "\n",
    "    # Create prompt template\n",
    "    prompt = ChatPromptTemplate.from_messages(\n",
    "        [\n",
    "            (\"system\", system_prompt),\n",
    "            (\n",
    "                \"human\",\n",
    "                f\"\"\"Here's the article content to discuss:\\n\\n{article}\n",
    "        <article end>\n",
    "\n",
    "        Here's for your reference the additional context from previous blogs when relevant during the conversation.\n",
    "        {context_text}\n",
    "\n",
    "        Number of Speakers in the podcast are - {number_of_speakers}\n",
    "        Speakers are - {speaker_names}\n",
    "        \"\"\",\n",
    "            ),\n",
    "        ]\n",
    "    )\n",
    "\n",
    "    # Create output parser\n",
    "    output_parser = JsonOutputParser()\n",
    "\n",
    "    # Create chain\n",
    "    chain = prompt | llm | output_parser\n",
    "\n",
    "    # Execute chain. Note: the prompt above was built with f-strings, so these\n",
    "    # keys are already interpolated; they are passed here for readability only.\n",
    "    response = chain.invoke(\n",
    "        {\n",
    "            \"article\": article,\n",
    "            \"number_of_speakers\": number_of_speakers,\n",
    "            \"context_text\": additional_context,\n",
    "            \"speaker_names\": speaker_names,\n",
    "        }\n",
    "    )\n",
    "\n",
    "    return response\n",
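    "\n",
    "\n",
    "# Hypothetical call sketch (placeholder article/context strings, not real data):\n",
    "# conversation = generate_conversation(\n",
    "#     article=\"...blog text...\",\n",
    "#     podcast_name=\"My Podcast\",\n",
    "#     speakers=[{\"name\": \"Shresth\"}, {\"name\": \"Arjun\"}],\n",
    "#     additional_context=\"...retrieved context...\",\n",
    "# )\n",
    "# Each item in the result is of the form {\"speaker\": \"Shresth\", \"text\": \"...\"}\n",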
    "\n",
    "\n",
    "# If you want more extensive scraping, you can use this function :) Use it only if\n",
    "# you are not satisfied with the output generated using FeedParser; for me the\n",
    "# simple scraper is enough to give the context of the other blogs.\n",
    "def extract_medium_blog_2(url):\n",
    "    # Add User-Agent to avoid being blocked\n",
    "    headers = {\n",
    "        \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36\"\n",
    "    }\n",
    "\n",
    "    try:\n",
    "        # Get the page content\n",
    "        response = requests.get(url, headers=headers)\n",
    "        response.raise_for_status()  # Raise exception for 4XX/5XX responses\n",
    "\n",
    "        # Parse the HTML\n",
    "        soup = BeautifulSoup(response.text, \"html.parser\")\n",
    "        # print(soup)  # uncomment to inspect the raw HTML while debugging\n",
    "        # Extract title - Medium typically uses <h1> for the main title\n",
    "        title_element = soup.find(\"h1\")\n",
    "        title = title_element.text.strip() if title_element else \"Title not found\"\n",
    "\n",
    "        # Find the article content\n",
    "        # Medium often puts the main content in an <article> tag\n",
    "        article = soup.find(\"article\")\n",
    "\n",
    "        # If article tag is found, extract content from it\n",
    "        if article:\n",
    "            paragraphs = article.find_all(\"p\")\n",
    "        else:\n",
    "            # Fallback to main content area\n",
    "            content_section = soup.find(\n",
    "                \"section\", class_=lambda c: c and \"section-content\" in c\n",
    "            )\n",
    "            if content_section:\n",
    "                paragraphs = content_section.find_all(\"p\")\n",
    "            else:\n",
    "                # Last resort: get all paragraphs\n",
    "                paragraphs = soup.find_all(\"p\")\n",
    "\n",
    "        # Extract text from paragraphs\n",
    "        content = \"\\n\\n\".join([p.text.strip() for p in paragraphs if p.text.strip()])\n",
    "\n",
    "        # Extract headings (h2, h3, h4)\n",
    "        headings = []\n",
    "        for heading_tag in [\"h2\", \"h3\", \"h4\"]:\n",
    "            headings.extend(\n",
    "                [h.text.strip() for h in soup.find_all(heading_tag) if h.text.strip()]\n",
    "            )\n",
    "\n",
    "        # Try to extract publication date\n",
    "        date = None\n",
    "        time_element = soup.find(\"time\")\n",
    "        if time_element and time_element.has_attr(\"datetime\"):\n",
    "            date = time_element[\"datetime\"]\n",
    "\n",
    "        # Extract author name if available\n",
    "        author_element = soup.find(\"a\", class_=lambda c: c and \"author\" in c)\n",
    "        author = author_element.text.strip() if author_element else \"Author not found\"\n",
    "\n",
    "        return {\n",
    "            \"title\": title,\n",
    "            \"author\": author,\n",
    "            \"date\": date,\n",
    "            \"headings\": headings,\n",
    "            \"content\": content,\n",
    "            \"url\": url,\n",
    "        }\n",
    "\n",
    "    except Exception as e:\n",
    "        print(f\"Error extracting blog from {url}: {str(e)}\")\n",
    "        return {\"error\": str(e), \"url\": url}\n",
    "\n",
    "\n",
    "def scrape_medium_blogs_2(blog_urls):\n",
    "    results = []\n",
    "    for url in blog_urls:\n",
    "        print(f\"Scraping: {url}\")\n",
    "        blog_data = extract_medium_blog_2(url)\n",
    "        results.append(blog_data)\n",
    "    return results\n",
    "\n",
    "\n",
    "# This is the scraper used in this notebook.\n",
    "def extract_medium_blog(url):\n",
    "\n",
    "    response = requests.get(url)\n",
    "    soup = BeautifulSoup(response.text, \"html.parser\")\n",
    "\n",
    "    title = soup.find(\"h1\").text\n",
    "    # date = soup.find(\"time\")[\"datetime\"]\n",
    "    paragraphs = soup.find_all(\"p\")\n",
    "    content = \"\\n\".join([p.text for p in paragraphs])\n",
    "    headings = [h.text for h in soup.find_all([\"h2\", \"h3\", \"h4\"])]\n",
    "    return {\"title\": title, \"headings\": headings, \"content\": content, \"url\": url}\n",
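    "\n",
    "\n",
    "# Example (hypothetical URL):\n",
    "# blog = extract_medium_blog(\"https://medium.com/@user/some-post\")\n",
    "# print(blog[\"title\"], blog[\"headings\"][:3])\n",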
    "\n",
    "\n",
    "# ElevenLabs TTS function\n",
    "def synthesize_speech_elevenlabs(text, voice_id, index, speaker_name):\n",
    "\n",
    "    try:\n",
    "        elevenlabs_url = f\"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}\"\n",
    "        elevenlabs_headers = {\n",
    "            \"Accept\": \"audio/mpeg\",\n",
    "            \"Content-Type\": \"application/json\",\n",
    "            \"xi-api-key\": elevenlabs_api_key,\n",
    "        }\n",
    "\n",
    "        data = {\n",
    "            \"text\": text,\n",
    "            \"model_id\": \"eleven_turbo_v2_5\",\n",
    "            \"voice_settings\": {\"stability\": 0.5, \"similarity_boost\": 0.75},\n",
    "        }\n",
    "        response = requests.post(elevenlabs_url, json=data, headers=elevenlabs_headers)\n",
    "\n",
    "        if response.status_code != 200:\n",
    "            print(f\"Error with ElevenLabs API: {response.text}\")\n",
    "            return None\n",
    "\n",
    "        filename = f\"audio-files/{index}_{speaker_name}.mp3\"\n",
    "        with open(filename, \"wb\") as out:\n",
    "            for chunk in response.iter_content(chunk_size=1024):\n",
    "                if chunk:\n",
    "                    out.write(chunk)\n",
    "\n",
    "        time.sleep(2)  # intentional delay to avoid ElevenLabs rate-limit errors\n",
    "        return filename\n",
    "\n",
    "    except Exception as e:\n",
    "        print(f\"Error with ElevenLabs synthesis: {str(e)}\")\n",
    "        return None\n",
    "\n",
    "\n",
    "# Smallest AI TTS function\n",
    "def synthesize_speech_smallest(text, voice_id, index, speaker_name, model=\"lightning\"):\n",
    "    try:\n",
    "        client = Smallest(api_key=smallest_api_key)\n",
    "\n",
    "        # Create filename\n",
    "        filename = f\"audio-files/{index}_{speaker_name}.wav\"\n",
    "\n",
    "        # Synthesize audio. With save_as set, the SDK writes the file itself,\n",
    "        # so no manual chunk-writing is needed (the return value is not a\n",
    "        # streaming HTTP response).\n",
    "        client.synthesize(text=text, save_as=filename, voice_id=voice_id)\n",
    "\n",
    "        return filename\n",
    "\n",
    "    except Exception as e:\n",
    "        print(f\"Error with Smallest AI synthesis: {str(e)}\")\n",
    "        return None\n",
    "\n",
    "\n",
    "# Function to generate the podcast audio from conversation data\n",
    "def generate_audio(conversation, speakers_map):\n",
    "    # if os.path.exists('audio-files'):\n",
    "    #     shutil.rmtree('audio-files')\n",
    "    os.makedirs(\"audio-files\", exist_ok=True)\n",
    "\n",
    "    file_paths = []\n",
    "\n",
    "    for index, part in enumerate(conversation):\n",
    "        speaker_name = part[\"speaker\"]\n",
    "        text = part[\"text\"]\n",
    "\n",
    "        # Find the voice configuration for this speaker\n",
    "        if speaker_name in speakers_map:\n",
    "            voice_config = speakers_map[speaker_name]\n",
    "            if voice_config[\"service\"] == \"elevenlabs\":\n",
    "                file_path = synthesize_speech_elevenlabs(\n",
    "                    text, voice_config[\"voice_id\"], index, speaker_name\n",
    "                )\n",
    "            elif voice_config[\"service\"] == \"smallest\":\n",
    "                file_path = synthesize_speech_smallest(\n",
    "                    text,\n",
    "                    voice_config[\"voice_id\"],\n",
    "                    index,\n",
    "                    speaker_name,\n",
    "                    voice_config.get(\n",
    "                        \"model\", \"lightning\"\n",
    "                    ),  # Default to lightning model\n",
    "                )\n",
    "\n",
    "            else:  # unrecognized TTS service\n",
    "                file_path = None\n",
    "                print(\n",
    "                    \"Define a TTS model to use, e.g. GCTTS, Smallest, Bhashini, ElevenLabs.\"\n",
    "                )\n",
    "\n",
    "            if file_path:\n",
    "                file_paths.append(file_path)\n",
    "\n",
    "    if not file_paths:\n",
    "        print(\"No audio files were generated.\")\n",
    "        return None\n",
    "    return file_paths\n",
    "\n",
    "\n",
    "# If your blogs are not behind a paywall, you can use this function to scrape content and save it completely into the vector database.\n",
    "import feedparser\n",
    "\n",
    "\n",
    "def fetch_medium_blogs(rss_url):\n",
    "    \"\"\"\n",
    "    Fetches blog posts from a Medium RSS feed and extracts relevant details.\n",
    "\n",
    "    Parameters:\n",
    "        rss_url (str): The URL of the Medium RSS feed.\n",
    "\n",
    "    Returns:\n",
    "        list: A list of dictionaries containing blog title, link, published date, summary, and content.\n",
    "    \"\"\"\n",
    "    # Parse the feed\n",
    "    feed = feedparser.parse(rss_url)\n",
    "\n",
    "    # Extract blog information\n",
    "    blogs = []\n",
    "\n",
    "    for entry in feed.entries:\n",
    "        blog = {\n",
    "            \"title\": entry.title,\n",
    "            \"link\": entry.link,\n",
    "            \"published\": entry.published,\n",
    "            \"summary\": entry.summary,\n",
    "            \"content\": entry.content[0].value if \"content\" in entry else entry.summary,\n",
    "        }\n",
    "        blogs.append(blog)\n",
    "\n",
    "    return blogs\n",
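    "\n",
    "\n",
    "# Example (Medium RSS feeds follow the pattern https://medium.com/feed/@username):\n",
    "# blogs = fetch_medium_blogs(\"https://medium.com/feed/@username\")\n",
    "# blogs[0][\"title\"] if blogs else None\n",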
    "\n",
    "\n",
    "def extract_keywords(text, num_keywords=5):\n",
    "    \"\"\"\n",
    "    Extracts important keywords/phrases from the given text using KeyBERT.\n",
    "\n",
    "    Parameters:\n",
    "        text (str): The input text for keyword extraction.\n",
    "        num_keywords (int): The number of keywords/phrases to extract.\n",
    "\n",
    "    Returns:\n",
    "        list: A list of extracted keywords/phrases.\n",
    "    \"\"\"\n",
    "    keywords = kw_model.extract_keywords(\n",
    "        text, keyphrase_ngram_range=(1, 2), stop_words=\"english\", top_n=num_keywords\n",
    "    )\n",
    "    return [kw[0] for kw in keywords]  # Extract only the keyword phrases\n",
    "\n",
    "\n",
    "# you can save this conversation in another vector table instance and use it as context for next blog.\n",
    "def save_conversation(conversation):\n",
    "    json_output_path = \"./conversation/conversation.json\"\n",
    "    os.makedirs(os.path.dirname(json_output_path), exist_ok=True)\n",
    "    with open(json_output_path, \"w\") as json_file:\n",
    "        json.dump(conversation, json_file, indent=4)\n",
    "    print(f\"Conversation saved to {json_output_path}\")\n",
    "\n",
    "\n",
    "# Function to sort filenames naturally\n",
    "def natural_sort_key(filename):\n",
    "    return [\n",
    "        int(text) if text.isdigit() else text for text in re.split(r\"(\\d+)\", filename)\n",
    "    ]\n",
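    "\n",
    "\n",
    "# Example: natural sort orders numeric prefixes numerically, not lexicographically:\n",
    "# sorted([\"10_Geet.mp3\", \"2_Arjun.mp3\", \"1_Shresth.mp3\"], key=natural_sort_key)\n",
    "# -> [\"1_Shresth.mp3\", \"2_Arjun.mp3\", \"10_Geet.mp3\"]\n",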
    "\n",
    "\n",
    "# Function to merge audio files\n",
    "def merge_audios(audio_folder, output_file):\n",
    "    combined = AudioSegment.empty()\n",
    "    audio_files = sorted(\n",
    "        [\n",
    "            f\n",
    "            for f in os.listdir(audio_folder)\n",
    "            if f.endswith(\".mp3\") or f.endswith(\".wav\")\n",
    "        ],\n",
    "        key=natural_sort_key,\n",
    "    )\n",
    "\n",
    "    for filename in audio_files:\n",
    "        audio_path = os.path.join(audio_folder, filename)\n",
    "        audio = AudioSegment.from_file(audio_path)\n",
    "        combined += audio\n",
    "    combined.export(output_file, format=\"mp3\")\n",
    "    return output_file"
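    ,
    "\n",
    "\n",
    "# End-to-end flow sketch (hypothetical; assumes an article string plus the\n",
    "# speakers_list and speakers_map from the configuration cell):\n",
    "# conversation = generate_conversation(article, \"My Podcast\", speakers_list, context)\n",
    "# files = generate_audio(conversation, speakers_map)\n",
    "# merge_audios(\"audio-files\", \"podcast.mp3\")"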
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HPUsU2gLKquC"
   },
   "source": [
    "#### Set-Up Configurations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 369,
     "referenced_widgets": [
      "2dd67279075c4e26a34aeaaee97e63ee",
      "af41deaa0bb242acb05a0054ec660fd2",
      "88e762128666432985859c0afcdda37d",
      "54c95731e79f4b88994d5355f233bf3f",
      "94b7c7fe2b154171b1b0d9dd37b55305",
      "e3afa829c48545ec959c8837a8fc45b7",
      "dcdf63a799c64a81a8759a861a2f2a35",
      "2bb8e4e8370549169b0a9457122467a6",
      "83ad846b30704d2ea6b8b45bcae101e0",
      "3b62ba5c1a8a41528f3bc906106ac7d6",
      "329e6b74d54c4b61ad5a06669009ec10",
      "b450edac5895408281ca12468601b939",
      "31ab1f87ecb5437ea37b65ff67c9e082",
      "e2e32746ce23496b889a4f37d19c20b7",
      "6c78e9a8de214e49b71408f6ff6ce73c",
      "4d0cf01dfdf242e5a48453064de85289",
      "6c455288c71248f99e82b66d55081f30",
      "819186820544494fac83619a676ed6bd",
      "1bc2f61b16f14db4b7c356fbc1f8917d",
      "a33d67318ea84be2817574653480fbb4",
      "df7db880ba6743b0a8dbd72a8b60965d",
      "84824cdc00304d50ad087f3ee5ab6397",
      "9532698c74844aa4875c9c57c044ae87",
      "33504013404c4e2f9640fe7b1c423a37",
      "88d00c008ebc41a284a1e40d0d0ebe86",
      "4453ee1c5a5941cbb3df280059438d20",
      "2bdc2c89e2114e2da5e9ea8065ef1d02",
      "3ff226d789ec420ea88a4d3ddbdafcd6",
      "cb351c790f254d3bb6cfdd391f246535",
      "af7cf4bc68d3464e8cd8f3cb886a58f9",
      "1a59f1fb889c47539a5a10ebc0268aa2",
      "a7f796c6331c48d79faa3e7ee150673b",
      "608f578ed5834de9a26b800c765e456f",
      "970f514da86248c791ad698f01b26904",
      "f359d5e48e20423eb5ef32991f269885",
      "1e58191bc9914ecf87c32f8a2cfa5789",
      "ce036b95030d41539dd6a9114ff3f9ed",
      "cfc6503f5b1449a086cda50bd4ddbfde",
      "28a8a75986da4ca6bd2ef6d18d3f33ab",
      "21e712208bc74fd397acfff76d6d180e",
      "9b51e899c90b49dca1e131b771df8b96",
      "121a1ecb5bca447486f48352a150778e",
      "2e78374b5f214893b83b7f269b951ece",
      "ae83edc7b8824073a6816419a542d981",
      "921c57ba422345b9921d46f1f40d7801",
      "ffd5b68fafb445a3a9f47dfd027880da",
      "d195c736d8a44c728a672f558dd281b4",
      "4e231b9e32514b329cda37176175ca78",
      "c52e2f1e7234446c90b5e1747de20627",
      "7a46374f46e0495b9fe11c72a9e96517",
      "2b68013a19d742b095d78da28b148296",
      "68bca59e711f4ef882fb8ea50971888f",
      "e29902062e094a4693553892a1a74f83",
      "15dc43a92eba4ababde6acf1875dcf92",
      "f8e515c21816457e97087554177def2b",
      "dde689c8854e4643a8273a18761d805f",
      "be418ec02fa7450d9e4ef631f68d64b9",
      "7faec0d64fca44fb9afd90a82ebaf20b",
      "4c897f981093421b8357f254f44d52c8",
      "bc8ded6f3adc4bdc9b2cad9f563e59f4",
      "0d31d14ca1ff40a6a1598db4818bfb00",
      "12783d2adc8e4ddfa597bd6d5f2525be",
      "cac9ab6af73241c98510bf013879603a",
      "b5c0646dd0994145be8e231119cf6b99",
      "2554d64ba0ce44918ddd3290eea9ae57",
      "eab2324a92384b32ae4f245e6eb43206",
      "af6b268426a642bf9d8500b6d6d58f35",
      "72d6c4508e0d4c30afb92476d8ce8938",
      "03494caf260541c18e1d405304904b0c",
      "0796471afd5e4283a4bd0fe9a4acf1ee",
      "245e7a11379546e3a491be3f5a82ff9f",
      "095629d302894d059cbfb5bdb1b91f68",
      "fc5aebadb25a424b9462aa4939365e44",
      "fed254388f81458c96a8701c59a5b98a",
      "79b8d06af2bf4d7d8d34f26a0dcc5279",
      "e0a1143110494ea7a3ef05266d0180a7",
      "3b0dc09f2d70489cbe0c244dc9fb57f3",
      "c96c5fbe05f549ca8f95a4f42d43719f",
      "fcaed6550ce84d64a0409b7c01fbe7fa",
      "ee8a1f14780e4af2a34a287e3b5e0ff4",
      "d899274bb4b74c9d8a2a7c0304e7ad94",
      "decfa118ea634c3aad191514be89c2ea",
      "00c2397ccb2a4b19a247ee3531fc5cb0",
      "ed680c5193f3468ca76264c86b56dc3e",
      "a4d29a677773487184729baad73bd69b",
      "20b7ba1d4e174ee286470b6787518237",
      "0aafe67bcdca483a8f955d08983e875f",
      "5babe90833a344cc93ba28b93d0da737",
      "23aca94bc9254807a16568dc09afd95c",
      "0524fb8bdaee4e6495fbb9d6edec8352",
      "bed87b8790ee4d5c9d6fa6eb38696e99",
      "d854a4c765574ddfa6f06d03882bbbc6",
      "ffe8305385c44a38a7d21d75dac78f4b",
      "6d0bd69f55674f8f918f7582ba912e5c",
      "f1f41728938b4efeb96bbeda1eb8d423",
      "cef02f8b4b3c4c8db4194172e75ae350",
      "a9e8bb70f7644b2093f7f63a633bd346",
      "329b4cf92b944770bc4d8b1b176761fd",
      "ca07874923454226b50d2eeee8377df9",
      "0dcfde57f9ed49149d622743d530bd7a",
      "3c3e4bdbe6c9466a9f7de601b11a9f5d",
      "de6253706c61426b9ff88da51f75df95",
      "c1c5bf2c343947b29e8f48246b9e77c9",
      "46438db940a9408d9b35ac4a440c3d36",
      "773de98bc257406f810e1f5a4c716b05",
      "2972ae1fadd740998c8b053d105afda0",
      "0e52ba2a202a48dc83441adc6d23abdf",
      "98768e8a20f64664ad2b786f2a34791d",
      "dfe1d3e9a8d745059330df71c94e1c3e",
      "3161b062da4d47379930431d9a73c52e",
      "392506f5695c42eca09613a5318eb41f",
      "fcaf1b5ef37542c98253cce6291afb18",
      "56747388e08848418b5946f498288158",
      "aa0b09c7a9e34dea976fce9928d0d000",
      "f71d695c327942e8a8a7f4e54532f172",
      "37b5619a7bd74e719a96f17ebf9a04c7",
      "49371b1ec6c34456ab4ce60f9f524e2e",
      "f0ae2a14bb7a4173b5c8a6572870675c",
      "ef3e3dacd17d4458911df27cb1a4b8a2",
      "3079a825a8bd4230a13cd48953922f5d",
      "bad7e710f9c744b28f8aeccdc62f9eba"
     ]
    },
    "id": "McnmGmU8KufZ",
    "outputId": "b9b8879e-2113-493e-a124-815290f0f370"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2dd67279075c4e26a34aeaaee97e63ee",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "modules.json:   0%|          | 0.00/349 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b450edac5895408281ca12468601b939",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config_sentence_transformers.json:   0%|          | 0.00/116 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9532698c74844aa4875c9c57c044ae87",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "README.md:   0%|          | 0.00/10.5k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "970f514da86248c791ad698f01b26904",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "sentence_bert_config.json:   0%|          | 0.00/53.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "921c57ba422345b9921d46f1f40d7801",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/612 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "dde689c8854e4643a8273a18761d805f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.safetensors:   0%|          | 0.00/90.9M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "af6b268426a642bf9d8500b6d6d58f35",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/350 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c96c5fbe05f549ca8f95a4f42d43719f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocab.txt:   0%|          | 0.00/232k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "23aca94bc9254807a16568dc09afd95c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/466k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "0dcfde57f9ed49149d622743d530bd7a",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "special_tokens_map.json:   0%|          | 0.00/112 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "392506f5695c42eca09613a5318eb41f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/190 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Create directory for audio files\n",
    "!mkdir -p audio-files\n",
    "\n",
    "# Define the list of speakers, with the host as the first entry\n",
    "speakers_list = [{\"name\": \"Shresth\"}, {\"name\": \"Arjun\"}, {\"name\": \"Geet\"}]  # Host\n",
    "\n",
    "# Set API key credentials in Colab secrets; get the keys from the respective websites.\n",
    "# Replace with os.getenv when running this as a script.\n",
    "from google.colab import userdata\n",
    "\n",
    "gemini_api_key = userdata.get(\"gemini\")  # change as per your credentials\n",
    "elevenlabs_api_key = userdata.get(\n",
    "    \"ELEVENLABS_API_KEY\"\n",
    ")  # change as per your credentials\n",
    "smallest_api_key = userdata.get(\"smallest\")  # change as per your credentials\n",
    "\n",
    "# using elevenlabs\n",
    "# --------------------------------\n",
    "# TTS voice IDs for each speaker; get these from the ElevenLabs website.\n",
    "speakers_map = {\n",
    "    \"Shresth\": {\"service\": \"elevenlabs\", \"voice_id\": \"Zp1aWhL05Pi5BkhizFC3\"},\n",
    "    \"Arjun\": {\"service\": \"elevenlabs\", \"voice_id\": \"PpXxSapWoo4j3JoF2LPQ\"},\n",
    "    \"Geet\": {\"service\": \"elevenlabs\", \"voice_id\": \"TRnaQb7q41oL7sV0w6Bu\"},\n",
    "}\n",
    "\n",
    "# using Smallest\n",
    "# ---------------------------------\n",
    "# TTS voice IDs for each speaker; get these from the Smallest AI website.\n",
    "# speakers_map={\"Shresth\" : {\"service\" : \"smallest\", \"voice_id\" : \"aarav\"},\n",
    "#               \"Arjun\" : {\"service\" : \"smallest\", \"voice_id\" : \"raman\"},\n",
    "#               \"Geet\" : {\"service\" : \"smallest\", \"voice_id\" : \"mithali\"}\n",
    "#               }\n",
    "\n",
    "\n",
    "# using mix of both models\n",
    "# ---------------------------------\n",
    "# TTS voice IDs for each speaker; get these from the Smallest AI and ElevenLabs websites.\n",
    "# speakers_map={\"Shresth\" : {\"service\" : \"smallest\", \"voice_id\" : \"aarav\"},\n",
    "#               \"Arjun\" : {\"service\" : \"elevenlabs\", \"voice_id\" : \"PpXxSapWoo4j3JoF2LPQ\"},\n",
    "#               \"Geet\" : {\"service\" : \"elevenlabs\", \"voice_id\" : \"TRnaQb7q41oL7sV0w6Bu\"}\n",
    "#               }\n",
    "\n",
    "\n",
    "# Keyword-extraction (KeyBERT) model to pull keywords from the input blog for searching the vector database\n",
    "# Initialize KeyBERT with a Sentence Transformer model\n",
    "kw_model = KeyBERT(model=\"all-MiniLM-L6-v2\")  # Lightweight and effective model"
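    ,
    "\n",
    "\n",
    "# Quick sanity check (made-up sample sentence, not from the blogs):\n",
    "# print(kw_model.extract_keywords(\"LanceDB is a vector database for semantic search\",\n",
    "#                                 keyphrase_ngram_range=(1, 2), top_n=3))"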
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "q9ff_USia0tF"
   },
   "source": [
    "#### Creating LanceDB Cloud Vector Store + Experimenting with Search"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "6YJQcuL1a0Wq",
    "outputId": "a1c72a8e-fa72-43b3-abf6-056c67ddf366"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['https://uselessai.in/how-to-create-spark-job-definitions-in-fabric-everything-you-need-to-know-to-start-0fb75b5ca432?sk=e6e62043c123f6cc805728c068bdbd17',\n",
      " 'https://uselessai.in/fabric-introduced-task-flows-how-to-start-building-with-microsoft-fabric-2508800a55af?sk=ccb029f394ff3c646fb283cab0847bf9',\n",
      " 'https://uselessai.in/using-for-loop-in-fabric-data-factory-for-parallel-processing-the-better-way-c5a884356a50?sk=eea7ae44cdd443493fcce40e4915bc32',\n",
      " 'https://uselessai.in/connecting-fabric-workspace-with-azure-blob-storage-trusted-workspace-connection-for-production-9f7c24a66d1b?sk=ffc921c05f711fc5b223ce48df4f1e85',\n",
      " 'https://theshresthshukla.medium.com/how-to-maintain-sanity-between-dev-stg-prod-in-fabric-tracking-changes-via-deployment-pipeline-984cb201f5d2?sk=9347f57e88d8beed2fc7783a6588998f',\n",
      " 'https://uselessai.in/microsoft-fabric-warehouse-deployment-issue-s-and-potential-solution-s-9ad360411f7a?sk=8ef1c60aa05023d443a5d00212cd4198',\n",
      " 'https://uselessai.in/microsoft-fabric-stored-procedure-not-reflecting-warehouse-connection-change-in-data-factory-e0dc96e6ce83']\n"
     ]
    }
   ],
   "source": [
    "# Load the JSON file of URLs\n",
    "with open(\"urls.json\", \"r\") as json_file:\n",
    "    data = json.load(json_file)\n",
    "\n",
    "# Extract the URLs\n",
    "urls = data[\"urls\"]\n",
    "# Print the list of URLs\n",
    "pprint(urls)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "id": "MJZwlZPuJ82y"
   },
   "outputs": [],
   "source": [
    "# Connect to (or create) a LanceDB database\n",
    "db = lancedb.connect(\"articles_db\")\n",
    "\n",
    "# Initialize the Sentence Transformer model.\n",
    "# Note: this is an alternative if you do not want to use LanceDB's built-in embedding feature. Later, I'll initialize the same model from the LanceDB registry.\n",
    "\n",
    "# you can uncomment this if you want to use Sentence Transformers externally.\n",
    "# model = SentenceTransformer(\"all-MiniLM-L6-v2\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "KUZufZJ9LaOx",
    "outputId": "1482b994-3d76-44d9-a80e-57f006e6fe8d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'title': 'How to create SPARK JOB definitions in Fabric? — Everything you need to know to start', 'headings': ['UselessAI.in', 'Notebook vs Spark Job Definition', 'How to create main and reference job files for Spark Job definition activity in Fabric?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'No responses yet'], 'content': 'Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a—\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\nShare\\nNote — My blogs are 100% free to read. Stuck behind paywall? CLICK HERE to read it for free.\\nIf you are reading this, I’m sure you have explored other content on this topic but didn’t find anything fruitful. Things can be quite confusing when it comes to creating Spark job definitions in Fabric — not just because it’s complex, but also because people generally prefer notebooks for processing these days.\\nVery few people are working with Spark job definitions for data processing in Fabric. Maybe it’s still too early for them, or they are still exploring.\\nToday, we’ll see how we can use Spark job definitions to process big data and how they offer a unique advantage over notebooks. Let’s go!\\nI assume you’ve worked with Fabric notebooks. It’s easy, right? You simply write your code, perform operations, and maybe write data to tables. The advantage of using notebooks is that they…\\n--\\n--\\nWE READ. WE BUILD. \\u200a—\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams', 'url': 'https://uselessai.in/how-to-create-spark-job-definitions-in-fabric-everything-you-need-to-know-to-start-0fb75b5ca432?sk=e6e62043c123f6cc805728c068bdbd17'}\n",
      "{'title': 'Fabric Introduced Task Flows— How to start building with Microsoft Fabric?', 'headings': ['UselessAI.in', 'Including my experience with Deployment Pipelines', 'How do we start building using Microsoft Fabric with pre-defined task flows?', 'What’s new?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'Responses (1)'], 'content': 'Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a—\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\n1\\nShare\\nNote — My blogs are 100% free to read. Stuck behind paywall? CLICK HERE to read it for FREE.\\nHi everybody, welcome to this Fabric Series on UselessAI.in! Guess what? Fabric is upgrading slowly. Most of the bugs I raised have now been solved. I had a conversation with Fabric Support, where I informed them about a few bugs, and they were eventually resolved, while some are still in the process of being fixed. Feels good that the product is improving with each passing day :)\\nIf you are new to Fabric or want better management of your workflow on Microsoft Fabric, this blog is for you. Fabric just got upgraded, and it has some new updates. I mean, there are many updates, but I’ll discuss a good one with you that will help you organize things in your Fabric Workspace and improve your overall project development.\\nRemember how we design a high-level architecture where we finalize the flow of the project? For example, how do you think a data project would look? We gather data — it could be from cloud sources or other databases — perform transformations on it, use PySpark Notebooks, write job definitions, and maybe build dashboards or AI…\\n--\\n--\\n1\\nWE READ. WE BUILD. 
\\u200a—\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams', 'url': 'https://uselessai.in/fabric-introduced-task-flows-how-to-start-building-with-microsoft-fabric-2508800a55af?sk=ccb029f394ff3c646fb283cab0847bf9'}\n",
      "{'title': 'Using For-Loop in Fabric Data Factory for Parallel Processing — the better way', 'headings': ['UselessAI.in', 'How I reduced my Data Factory Pipeline processing time by 70-75%', 'A better optimization hack — Skip using set-variable activity or invoking another pipeline for parallel processing', 'How to design Data Factory pipeline for parallel processing in Fabric? : Microsoft Fabric', 'Parallel processing with “Set Variable” Activity — Fixed', 'Don’t use “Set Variable” activity in Data Factory Pipeline: Microsoft Fabric', 'How not-to use “Set Variable” activity in Fabric? (with Solution) — Fixing “Set Variable” bug on Microsoft Fabric', 'Can we do this processing without invoking another pipeline?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'Responses (1)'], 'content': 'Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a—\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\n1\\nShare\\nNote — My blogs are 100% free to read. Stuck behind paywall? Read it for free. Click Here!\\nDo you remember how we found some issues while using the set variable activity in Fabric and then encountered problems with that approach? Then we figured out a better way of using a for loop with the set variable activity by introducing another activity — invoke pipeline. Well, it solves the problem, but we have an even better way to do this. Let’s see. Check out the other blogs on this same topic here —\\nuselessai.in\\nuselessai.in\\n--\\n--\\n1\\nWE READ. WE BUILD. 
\\u200a—\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams', 'url': 'https://uselessai.in/using-for-loop-in-fabric-data-factory-for-parallel-processing-the-better-way-c5a884356a50?sk=eea7ae44cdd443493fcce40e4915bc32'}\n",
      "{'title': 'Connecting Fabric Workspace with Azure Blob Storage— Trusted Workspace Connection for Production', 'headings': ['UselessAI.in', 'What is the secured way of connecting Fabric with Azure Blob?', 'What are the different ways to connect our workspace with Azure Blob for use in sub-components?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'Responses (1)'], 'content': 'Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a—\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nFeatured\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\n1\\nShare\\nNote — My blogs are 100% free to read. Stuck behind paywall? Read this blog for free. Click Here\\nHi all, this is an interesting topic that many will find useful when deploying their solution across multiple stages of the development lifecycle on Microsoft Fabric.\\nFabric Workspace allows you to connect with Azure Blob in multiple ways. You can use this connection for different purposes. One way is to use it directly inside a notebook by mounting the Blob Storage as a shortcut, allowing you to access the data seamlessly.\\nAnother way to use Azure Blob in Fabric is when building a Data Factory pipeline, where copy activities move data from Blob Storage to Fabric OneLake. But what if your storage is secured behind a firewall?\\nWe’ll explore both methods of connecting Blob Storage with Fabric, along with some unique insights that you won’t easily find online. Or maybe you will xd.\\n--\\n--\\n1\\nWE READ. WE BUILD. 
\\u200a—\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams', 'url': 'https://uselessai.in/connecting-fabric-workspace-with-azure-blob-storage-trusted-workspace-connection-for-production-9f7c24a66d1b?sk=ffc921c05f711fc5b223ce48df4f1e85'}\n",
      "{'title': 'How to maintain sanity between DEV-STG-PROD in Fabric? — Tracking Changes via Deployment Pipeline', 'headings': ['UselessAI.in', 'How to communicate between workspaces in Fabric?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'No responses yet'], 'content': 'Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a—\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\nShare\\nMy blogs are 100% free to read. Stuck behind Paywall? Read this blog for FREE —Click Here\\nEvery good data project goes through three stages — Development, Testing (often called Staging), and Production. Somewhere in between, there comes a situation where you make direct changes in the staging environment — either during testing or due to manual effort required after deploying certain items from development to staging.\\nFor example, Fabric currently doesn’t support deploying warehouse connections automatically, so you might need to change this manually in another environment after deployment. A similar situation could happen with some parts of the code — like making manual entries in tables post-deployment or fixing bugs directly in staging during testing.\\nAnd this is where things get messy. If you make changes in the staging notebook but don’t immediately apply them to the development environment, you might face issues later. This is a common mistake — people often fix bugs quickly in testing but forget to sync those changes back to development.\\nHello all, welcome to this Fabric series, where we share interesting content about development and deployment on Microsoft Fabric.\\n--\\n--\\nWE READ. WE BUILD. 
\\u200a—\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams', 'url': 'https://theshresthshukla.medium.com/how-to-maintain-sanity-between-dev-stg-prod-in-fabric-tracking-changes-via-deployment-pipeline-984cb201f5d2?sk=9347f57e88d8beed2fc7783a6588998f'}\n",
      "{'title': 'Microsoft Fabric\\u200aWarehouse Deployment Issue(s) and Potential Solution(s)\\u200a— DmsImportDatabaseException', 'headings': ['UselessAI.in', 'HOW TO SOLVE THIS ERROR — “DMSIMPORTDATABASEEXCEPTION”', 'How to deploy warehouses in Microsoft Fabric without database exception?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'Responses (1)'], 'content': 'Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a—\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\n1\\nShare\\nNOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link — CLICK HERE. ❤\\nWho am I? -> Hi, Shresth Shukla this side. I’m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I’m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps 👏 — it’ll motivate me to write more about Data and AI.\\nFirst things first! This will be a short and very specific blog about the issue I faced during the deployment of a warehouse in Microsoft Fabric. I needed to fix it to ensure it went into the testing environment!\\nNote that before you even think about deployment, you need to be an admin of the workspace, and you should have a Microsoft subscription, as mentioned in their documentation —\\n--\\n--\\n1\\nWE READ. WE BUILD. 
\\u200a—\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams', 'url': 'https://uselessai.in/microsoft-fabric-warehouse-deployment-issue-s-and-potential-solution-s-9ad360411f7a?sk=8ef1c60aa05023d443a5d00212cd4198'}\n",
      "{'title': 'Microsoft Fabric — Stored Procedure Not Reflecting Warehouse Connection Change in Data Factory', 'headings': ['UselessAI.in', 'Stored procedure activities do not persist warehouse connection changes. :)', 'How to fix warehouse connection in stored procedure activity after pipeline deployment without deleting activity?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'No responses yet'], 'content': 'Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a—\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\nShare\\nNOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link — CLICK HERE. ❤\\nWho am I? -> Hi, Shresth Shukla this side. I’m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I’m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps 👏 — it’ll motivate me to write more about Data and AI.\\nHi everybody, this is Part 2 of the Microsoft Fabric Series. In the first part, we learned and explored potential issues that might arise during warehouse deployment. A very common issue occurs when deploying for the first time and moving from the development to the staging/test environment. You can find it here.\\n--\\n--\\nWE READ. WE BUILD. 
\\u200a—\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams', 'url': 'https://uselessai.in/microsoft-fabric-stored-procedure-not-reflecting-warehouse-connection-change-in-data-factory-e0dc96e6ce83'}\n"
     ]
    }
   ],
   "source": [
    "# list to append all blog data\n",
    "\n",
    "blog_content = []\n",
    "\n",
    "for url in urls:\n",
    "    blog_data = extract_medium_blog(url)\n",
    "    blog_content.append(blog_data)\n",
    "    print(blog_data)\n",
    "\n",
    "\n",
    "# if your content is not behind paywall you can use this code to scrape content directly from your medium feed.\n",
    "\n",
    "# rss_url = \"https://medium.com/feed/@theshresthshukla\"\n",
    "# medium_blogs = fetch_medium_blogs(rss_url)\n",
    "\n",
    "# # Print the extracted data\n",
    "# for blog in medium_blogs:\n",
    "#     print(blog)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {
    "id": "3WxJyRjpK4WX"
   },
   "outputs": [],
   "source": [
    "# Define your table schema\n",
    "# Each record has: title, headings, content, and url, and corresponding vector embedding for content\n",
    "\n",
    "from lancedb.pydantic import LanceModel, Vector\n",
    "from lancedb.embeddings import get_registry\n",
    "\n",
    "embeddings = get_registry().get(\"sentence-transformers\").create()\n",
    "\n",
    "\n",
    "# it is important to create schema and define source fields that needs to be converted  into embedding vectors.\n",
    "class Article(LanceModel):\n",
    "    title: str\n",
    "    headings: list[str]\n",
    "    content: str = embeddings.SourceField()\n",
    "    url: str\n",
    "    embedding: Vector(embeddings.ndims()) = embeddings.VectorField()\n",
    "\n",
    "\n",
    "data = blog_content\n",
    "\n",
    "# you can use this code if you plan to use Sentence Transformers from HF directly.\n",
    "# for record in data:\n",
    "#     # You might choose to combine multiple fields; here we use just content\n",
    "#     record[\"embedding\"] = model.encode(record[\"content\"]).tolist()\n",
    "\n",
    "table = db.create_table(\"articles\", data=data, mode=\"overwrite\", schema=Article)\n",
    "\n",
    "# you can create another table to store podcast script or conversations generated and use it for context later to mention it in next podcast.\n",
    "# For example, something like this, -- \"Yeah exactly, In my last podcast on Fabric, Arjun mentioend about this issue too\". SO your script will have information not just from your blog but also from your previous blogs and previous pocasts. In similar way, you can think of adding more content sources for personalization.\n",
    "\n",
    "\n",
    "# writing this just for your reference\n",
    "class PodcastSegment(LanceModel):\n",
    "    podcast_name: str\n",
    "    episode_number: int\n",
    "    speakers: str\n",
    "    text: str = embeddings.SourceField()  # Field that will be used for embedding\n",
    "    embedding: Vector(embeddings.ndims()) = embeddings.VectorField()  # Vector field\n",
    "\n",
    "\n",
    "conv_table = db.create_table(\"conversation\", schema=PodcastSegment, mode=\"overwrite\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 98
    },
    "id": "RPCXjcnFNhgp",
    "outputId": "a6e1bb17-a675-4fe2-9750-b4000b949367"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.google.colaboratory.intrinsic+json": {
       "summary": "{\n  \"name\": \"#note that searching for the keyword \\\"blob\\\" results in one row only, means it was only mentioned in one blog and not others\",\n  \"rows\": 1,\n  \"fields\": [\n    {\n      \"column\": \"title\",\n      \"properties\": {\n        \"dtype\": \"string\",\n        \"num_unique_values\": 1,\n        \"samples\": [\n          \"Connecting Fabric Workspace with Azure Blob Storage\\u2014 Trusted Workspace Connection for Production\"\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"headings\",\n      \"properties\": {\n        \"dtype\": \"object\",\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"content\",\n      \"properties\": {\n        \"dtype\": \"string\",\n        \"num_unique_values\": 1,\n        \"samples\": [\n          \"Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nFeatured\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\n1\\nShare\\nNote \\u2014 My blogs are 100% free to read. Stuck behind paywall? Read this blog for free. Click Here\\nHi all, this is an interesting topic that many will find useful when deploying their solution across multiple stages of the development lifecycle on Microsoft Fabric.\\nFabric Workspace allows you to connect with Azure Blob in multiple ways. You can use this connection for different purposes. One way is to use it directly inside a notebook by mounting the Blob Storage as a shortcut, allowing you to access the data seamlessly.\\nAnother way to use Azure Blob in Fabric is when building a Data Factory pipeline, where copy activities move data from Blob Storage to Fabric OneLake. 
But what if your storage is secured behind a firewall?\\nWe\\u2019ll explore both methods of connecting Blob Storage with Fabric, along with some unique insights that you won\\u2019t easily find online. Or maybe you will xd.\\n--\\n--\\n1\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams\"\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"_score\",\n      \"properties\": {\n        \"dtype\": \"number\",\n        \"std\": null,\n        \"min\": 4.697502136230469,\n        \"max\": 4.697502136230469,\n        \"num_unique_values\": 1,\n        \"samples\": [\n          4.697502136230469\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    }\n  ]\n}",
       "type": "dataframe"
      },
      "text/html": [
       "\n",
       "  <div id=\"df-28042e33-20f0-402a-817e-b8cbd3bcb7f9\" class=\"colab-df-container\">\n",
       "    <div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title</th>\n",
       "      <th>headings</th>\n",
       "      <th>content</th>\n",
       "      <th>_score</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Connecting Fabric Workspace with Azure Blob St...</td>\n",
       "      <td>[UselessAI.in, What is the secured way of conn...</td>\n",
       "      <td>Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...</td>\n",
       "      <td>4.697502</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>\n",
       "    <div class=\"colab-df-buttons\">\n",
       "\n",
       "  <div class=\"colab-df-container\">\n",
       "    <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-28042e33-20f0-402a-817e-b8cbd3bcb7f9')\"\n",
       "            title=\"Convert this dataframe to an interactive table.\"\n",
       "            style=\"display:none;\">\n",
       "\n",
       "  <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\" viewBox=\"0 -960 960 960\">\n",
       "    <path d=\"M120-120v-720h720v720H120Zm60-500h600v-160H180v160Zm220 220h160v-160H400v160Zm0 220h160v-160H400v160ZM180-400h160v-160H180v160Zm440 0h160v-160H620v160ZM180-180h160v-160H180v160Zm440 0h160v-160H620v160Z\"/>\n",
       "  </svg>\n",
       "    </button>\n",
       "\n",
       "  <style>\n",
       "    .colab-df-container {\n",
       "      display:flex;\n",
       "      gap: 12px;\n",
       "    }\n",
       "\n",
       "    .colab-df-convert {\n",
       "      background-color: #E8F0FE;\n",
       "      border: none;\n",
       "      border-radius: 50%;\n",
       "      cursor: pointer;\n",
       "      display: none;\n",
       "      fill: #1967D2;\n",
       "      height: 32px;\n",
       "      padding: 0 0 0 0;\n",
       "      width: 32px;\n",
       "    }\n",
       "\n",
       "    .colab-df-convert:hover {\n",
       "      background-color: #E2EBFA;\n",
       "      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
       "      fill: #174EA6;\n",
       "    }\n",
       "\n",
       "    .colab-df-buttons div {\n",
       "      margin-bottom: 4px;\n",
       "    }\n",
       "\n",
       "    [theme=dark] .colab-df-convert {\n",
       "      background-color: #3B4455;\n",
       "      fill: #D2E3FC;\n",
       "    }\n",
       "\n",
       "    [theme=dark] .colab-df-convert:hover {\n",
       "      background-color: #434B5C;\n",
       "      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
       "      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
       "      fill: #FFFFFF;\n",
       "    }\n",
       "  </style>\n",
       "\n",
       "    <script>\n",
       "      const buttonEl =\n",
       "        document.querySelector('#df-28042e33-20f0-402a-817e-b8cbd3bcb7f9 button.colab-df-convert');\n",
       "      buttonEl.style.display =\n",
       "        google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
       "\n",
       "      async function convertToInteractive(key) {\n",
       "        const element = document.querySelector('#df-28042e33-20f0-402a-817e-b8cbd3bcb7f9');\n",
       "        const dataTable =\n",
       "          await google.colab.kernel.invokeFunction('convertToInteractive',\n",
       "                                                    [key], {});\n",
       "        if (!dataTable) return;\n",
       "\n",
       "        const docLinkHtml = 'Like what you see? Visit the ' +\n",
       "          '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
       "          + ' to learn more about interactive tables.';\n",
       "        element.innerHTML = '';\n",
       "        dataTable['output_type'] = 'display_data';\n",
       "        await google.colab.output.renderOutput(dataTable, element);\n",
       "        const docLink = document.createElement('div');\n",
       "        docLink.innerHTML = docLinkHtml;\n",
       "        element.appendChild(docLink);\n",
       "      }\n",
       "    </script>\n",
       "  </div>\n",
       "\n",
       "\n",
       "    </div>\n",
       "  </div>\n"
      ],
      "text/plain": [
       "                                               title  \\\n",
       "0  Connecting Fabric Workspace with Azure Blob St...   \n",
       "\n",
       "                                            headings  \\\n",
       "0  [UselessAI.in, What is the secured way of conn...   \n",
       "\n",
       "                                             content    _score  \n",
       "0  Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...  4.697502  "
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# using FTS to search for keywords only.\n",
    "\n",
    "table.create_fts_index([\"title\", \"content\"], use_tantivy=True, replace=True)\n",
    "# table.create_fts_index(\"title\", use_tantivy=True)\n",
    "\n",
    "table.search(\"blob\", fts_columns=[\"title\", \"content\"], query_type=\"fts\").limit(\n",
    "    3\n",
    ").select([\"title\", \"headings\", \"content\"]).to_pandas()\n",
    "\n",
    "# note that searching for the keyword \"blob\" results in one row only, means it was only mentioned in one blog and not others."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 300
    },
    "id": "YlLZfBVONhNe",
    "outputId": "70d73a73-b0f8-4286-9940-855f1e6fdb45"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.google.colaboratory.intrinsic+json": {
       "summary": "{\n  \"name\": \")\",\n  \"rows\": 3,\n  \"fields\": [\n    {\n      \"column\": \"title\",\n      \"properties\": {\n        \"dtype\": \"string\",\n        \"num_unique_values\": 3,\n        \"samples\": [\n          \"Connecting Fabric Workspace with Azure Blob Storage\\u2014 Trusted Workspace Connection for Production\",\n          \"Microsoft Fabric \\u2014 Stored Procedure Not Reflecting Warehouse Connection Change in Data Factory\",\n          \"Microsoft Fabric\\u200aWarehouse Deployment Issue(s) and Potential Solution(s)\\u200a\\u2014 DmsImportDatabaseException\"\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"headings\",\n      \"properties\": {\n        \"dtype\": \"object\",\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"content\",\n      \"properties\": {\n        \"dtype\": \"string\",\n        \"num_unique_values\": 3,\n        \"samples\": [\n          \"Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nFeatured\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\n1\\nShare\\nNote \\u2014 My blogs are 100% free to read. Stuck behind paywall? Read this blog for free. Click Here\\nHi all, this is an interesting topic that many will find useful when deploying their solution across multiple stages of the development lifecycle on Microsoft Fabric.\\nFabric Workspace allows you to connect with Azure Blob in multiple ways. You can use this connection for different purposes. 
One way is to use it directly inside a notebook by mounting the Blob Storage as a shortcut, allowing you to access the data seamlessly.\\nAnother way to use Azure Blob in Fabric is when building a Data Factory pipeline, where copy activities move data from Blob Storage to Fabric OneLake. But what if your storage is secured behind a firewall?\\nWe\\u2019ll explore both methods of connecting Blob Storage with Fabric, along with some unique insights that you won\\u2019t easily find online. Or maybe you will xd.\\n--\\n--\\n1\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams\",\n          \"Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\nShare\\nNOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link \\u2014 CLICK HERE. \\u2764\\nWho am I? -> Hi, Shresth Shukla this side. I\\u2019m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I\\u2019m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps \\ud83d\\udc4f \\u2014 it\\u2019ll motivate me to write more about Data and AI.\\nHi everybody, this is Part 2 of the Microsoft Fabric Series. In the first part, we learned and explored potential issues that might arise during warehouse deployment. 
A very common issue occurs when deploying for the first time and moving from the development to the staging/test environment. You can find it here.\\n--\\n--\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams\",\n          \"Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\n1\\nShare\\nNOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link \\u2014 CLICK HERE. \\u2764\\nWho am I? -> Hi, Shresth Shukla this side. I\\u2019m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I\\u2019m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps \\ud83d\\udc4f \\u2014 it\\u2019ll motivate me to write more about Data and AI.\\nFirst things first! This will be a short and very specific blog about the issue I faced during the deployment of a warehouse in Microsoft Fabric. I needed to fix it to ensure it went into the testing environment!\\nNote that before you even think about deployment, you need to be an admin of the workspace, and you should have a Microsoft subscription, as mentioned in their documentation \\u2014\\n--\\n--\\n1\\nWE READ. WE BUILD. 
\\u200a\\u2014\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams\"\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"url\",\n      \"properties\": {\n        \"dtype\": \"string\",\n        \"num_unique_values\": 3,\n        \"samples\": [\n          \"https://uselessai.in/connecting-fabric-workspace-with-azure-blob-storage-trusted-workspace-connection-for-production-9f7c24a66d1b?sk=ffc921c05f711fc5b223ce48df4f1e85\",\n          \"https://uselessai.in/microsoft-fabric-stored-procedure-not-reflecting-warehouse-connection-change-in-data-factory-e0dc96e6ce83\",\n          \"https://uselessai.in/microsoft-fabric-warehouse-deployment-issue-s-and-potential-solution-s-9ad360411f7a?sk=8ef1c60aa05023d443a5d00212cd4198\"\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"embedding\",\n      \"properties\": {\n        \"dtype\": \"object\",\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"_distance\",\n      \"properties\": {\n        \"dtype\": \"float32\",\n        \"num_unique_values\": 3,\n        \"samples\": [\n          1.2712730169296265,\n          1.7686502933502197,\n          1.7715858221054077\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    }\n  ]\n}",
       "type": "dataframe"
      },
      "text/html": [
       "\n",
       "  <div id=\"df-927c8c61-bb3a-44d7-9424-799f3ce6cc23\" class=\"colab-df-container\">\n",
       "    <div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title</th>\n",
       "      <th>headings</th>\n",
       "      <th>content</th>\n",
       "      <th>url</th>\n",
       "      <th>embedding</th>\n",
       "      <th>_distance</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Connecting Fabric Workspace with Azure Blob St...</td>\n",
       "      <td>[UselessAI.in, What is the secured way of conn...</td>\n",
       "      <td>Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...</td>\n",
       "      <td>https://uselessai.in/connecting-fabric-workspa...</td>\n",
       "      <td>[0.026203994, -0.032301288, -0.0653698, 0.0590...</td>\n",
       "      <td>1.271273</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Microsoft Fabric — Stored Procedure Not Reflec...</td>\n",
       "      <td>[UselessAI.in, Stored procedure activities do ...</td>\n",
       "      <td>Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...</td>\n",
       "      <td>https://uselessai.in/microsoft-fabric-stored-p...</td>\n",
       "      <td>[0.045589827, -0.027090807, 0.028146721, 0.100...</td>\n",
       "      <td>1.768650</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Microsoft Fabric Warehouse Deployment Issue(s)...</td>\n",
       "      <td>[UselessAI.in, HOW TO SOLVE THIS ERROR — “DMSI...</td>\n",
       "      <td>Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...</td>\n",
       "      <td>https://uselessai.in/microsoft-fabric-warehous...</td>\n",
       "      <td>[0.03162163, -0.027363822, 0.00063092454, 0.09...</td>\n",
       "      <td>1.771586</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>\n",
       "  </div>\n"
      ],
      "text/plain": [
       "                                               title  \\\n",
       "0  Connecting Fabric Workspace with Azure Blob St...   \n",
       "1  Microsoft Fabric — Stored Procedure Not Reflec...   \n",
       "2  Microsoft Fabric Warehouse Deployment Issue(s)...   \n",
       "\n",
       "                                            headings  \\\n",
       "0  [UselessAI.in, What is the secured way of conn...   \n",
       "1  [UselessAI.in, Stored procedure activities do ...   \n",
       "2  [UselessAI.in, HOW TO SOLVE THIS ERROR — “DMSI...   \n",
       "\n",
       "                                             content  \\\n",
       "0  Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...   \n",
       "1  Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...   \n",
       "2  Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...   \n",
       "\n",
       "                                                 url  \\\n",
       "0  https://uselessai.in/connecting-fabric-workspa...   \n",
       "1  https://uselessai.in/microsoft-fabric-stored-p...   \n",
       "2  https://uselessai.in/microsoft-fabric-warehous...   \n",
       "\n",
       "                                           embedding  _distance  \n",
       "0  [0.026203994, -0.032301288, -0.0653698, 0.0590...   1.271273  \n",
       "1  [0.045589827, -0.027090807, 0.028146721, 0.100...   1.768650  \n",
       "2  [0.03162163, -0.027363822, 0.00063092454, 0.09...   1.771586  "
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Vector search: retrieve the three rows whose embeddings are closest to the query.\n",
     "table.search(\"blob\", query_type=\"vector\", vector_column_name=\"embedding\").limit(\n",
     "    3\n",
     ").to_pandas()"
   ]
  },
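   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Illustrative sketch (not part of the original pipeline): one way to fold the\n",
     "# retrieved rows into a single context string for the podcast-script prompt.\n",
     "# The column names (title, url, content) come from the table above; the\n",
     "# 500-character truncation per article is an arbitrary choice.\n",
     "hits = table.search(\n",
     "    \"blob\", query_type=\"vector\", vector_column_name=\"embedding\"\n",
     ").limit(3).to_pandas()\n",
     "context = \"\\n\\n\".join(\n",
     "    f\"{row.title} ({row.url}):\\n{row.content[:500]}\" for row in hits.itertuples()\n",
     ")\n",
     "print(context[:300])"
    ]
   },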
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 386
    },
    "id": "OkpFpHejLekR",
    "outputId": "95efe7de-9e58-4696-c43e-1b627219708c"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.google.colaboratory.intrinsic+json": {
       "summary": "{\n  \"name\": \"results\",\n  \"rows\": 3,\n  \"fields\": [\n    {\n      \"column\": \"title\",\n      \"properties\": {\n        \"dtype\": \"string\",\n        \"num_unique_values\": 3,\n        \"samples\": [\n          \"How to maintain sanity between DEV-STG-PROD in Fabric? \\u2014 Tracking Changes via Deployment Pipeline\",\n          \"Microsoft Fabric\\u200aWarehouse Deployment Issue(s) and Potential Solution(s)\\u200a\\u2014 DmsImportDatabaseException\",\n          \"Microsoft Fabric \\u2014 Stored Procedure Not Reflecting Warehouse Connection Change in Data Factory\"\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"headings\",\n      \"properties\": {\n        \"dtype\": \"object\",\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"content\",\n      \"properties\": {\n        \"dtype\": \"string\",\n        \"num_unique_values\": 3,\n        \"samples\": [\n          \"Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\nShare\\nMy blogs are 100% free to read. Stuck behind Paywall? Read this blog for FREE \\u2014Click Here\\nEvery good data project goes through three stages \\u2014 Development, Testing (often called Staging), and Production. Somewhere in between, there comes a situation where you make direct changes in the staging environment \\u2014 either during testing or due to manual effort required after deploying certain items from development to staging.\\nFor example, Fabric currently doesn\\u2019t support deploying warehouse connections automatically, so you might need to change this manually in another environment after deployment. 
A similar situation could happen with some parts of the code \\u2014 like making manual entries in tables post-deployment or fixing bugs directly in staging during testing.\\nAnd this is where things get messy. If you make changes in the staging notebook but don\\u2019t immediately apply them to the development environment, you might face issues later. This is a common mistake \\u2014 people often fix bugs quickly in testing but forget to sync those changes back to development.\\nHello all, welcome to this Fabric series, where we share interesting content about development and deployment on Microsoft Fabric.\\n--\\n--\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams\",\n          \"Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\n1\\nShare\\nNOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link \\u2014 CLICK HERE. \\u2764\\nWho am I? -> Hi, Shresth Shukla this side. I\\u2019m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I\\u2019m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps \\ud83d\\udc4f \\u2014 it\\u2019ll motivate me to write more about Data and AI.\\nFirst things first! This will be a short and very specific blog about the issue I faced during the deployment of a warehouse in Microsoft Fabric. 
I needed to fix it to ensure it went into the testing environment!\\nNote that before you even think about deployment, you need to be an admin of the workspace, and you should have a Microsoft subscription, as mentioned in their documentation \\u2014\\n--\\n--\\n1\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams\",\n          \"Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibrary\\nStories\\nStats\\nHome\\nNewsletter\\nAbout\\nFollow publication\\nWE READ. WE BUILD. \\u200a\\u2014\\u200aLearning AI by reading and building.\\nFollow publication\\nMember-only story\\nShresth Shukla\\nFollow\\nUselessAI.in\\n--\\nShare\\nNOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link \\u2014 CLICK HERE. \\u2764\\nWho am I? -> Hi, Shresth Shukla this side. I\\u2019m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I\\u2019m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps \\ud83d\\udc4f \\u2014 it\\u2019ll motivate me to write more about Data and AI.\\nHi everybody, this is Part 2 of the Microsoft Fabric Series. In the first part, we learned and explored potential issues that might arise during warehouse deployment. A very common issue occurs when deploying for the first time and moving from the development to the staging/test environment. You can find it here.\\n--\\n--\\nWE READ. WE BUILD. 
\\u200a\\u2014\\u200aLearning AI by reading and building.\\nData & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\\nHelp\\nStatus\\nAbout\\nCareers\\nPress\\nBlog\\nPrivacy\\nTerms\\nText to speech\\nTeams\"\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"url\",\n      \"properties\": {\n        \"dtype\": \"string\",\n        \"num_unique_values\": 3,\n        \"samples\": [\n          \"https://theshresthshukla.medium.com/how-to-maintain-sanity-between-dev-stg-prod-in-fabric-tracking-changes-via-deployment-pipeline-984cb201f5d2?sk=9347f57e88d8beed2fc7783a6588998f\",\n          \"https://uselessai.in/microsoft-fabric-warehouse-deployment-issue-s-and-potential-solution-s-9ad360411f7a?sk=8ef1c60aa05023d443a5d00212cd4198\",\n          \"https://uselessai.in/microsoft-fabric-stored-procedure-not-reflecting-warehouse-connection-change-in-data-factory-e0dc96e6ce83\"\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"embedding\",\n      \"properties\": {\n        \"dtype\": \"object\",\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"_relevance_score\",\n      \"properties\": {\n        \"dtype\": \"float32\",\n        \"num_unique_values\": 3,\n        \"samples\": [\n          0.9118236899375916,\n          0.699999988079071,\n          0.30000001192092896\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    }\n  ]\n}",
       "type": "dataframe",
       "variable_name": "results"
      },
      "text/html": [
       "\n",
       "  <div id=\"df-749937da-933c-4b42-a99f-18516c3cc88f\" class=\"colab-df-container\">\n",
       "    <div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title</th>\n",
       "      <th>headings</th>\n",
       "      <th>content</th>\n",
       "      <th>url</th>\n",
       "      <th>embedding</th>\n",
       "      <th>_relevance_score</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>How to maintain sanity between DEV-STG-PROD in...</td>\n",
       "      <td>[UselessAI.in, How to communicate between work...</td>\n",
       "      <td>Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...</td>\n",
       "      <td>https://theshresthshukla.medium.com/how-to-mai...</td>\n",
       "      <td>[0.06621531, -0.056588776, 0.022645105, 0.0588...</td>\n",
       "      <td>0.911824</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Microsoft Fabric Warehouse Deployment Issue(s)...</td>\n",
       "      <td>[UselessAI.in, HOW TO SOLVE THIS ERROR — “DMSI...</td>\n",
       "      <td>Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...</td>\n",
       "      <td>https://uselessai.in/microsoft-fabric-warehous...</td>\n",
       "      <td>[0.03162163, -0.027363822, 0.00063092454, 0.09...</td>\n",
       "      <td>0.700000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Microsoft Fabric — Stored Procedure Not Reflec...</td>\n",
       "      <td>[UselessAI.in, Stored procedure activities do ...</td>\n",
       "      <td>Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...</td>\n",
       "      <td>https://uselessai.in/microsoft-fabric-stored-p...</td>\n",
       "      <td>[0.045589827, -0.027090807, 0.028146721, 0.100...</td>\n",
       "      <td>0.300000</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>\n",
       "    <div class=\"colab-df-buttons\">\n",
       "\n",
       "\n",
       "\n",
       "\n",
       "  <div id=\"id_708d1408-9beb-4ed8-bd05-885a7da4e266\">\n",
       "    <style>\n",
       "      .colab-df-generate {\n",
       "        background-color: #E8F0FE;\n",
       "        border: none;\n",
       "        border-radius: 50%;\n",
       "        cursor: pointer;\n",
       "        display: none;\n",
       "        fill: #1967D2;\n",
       "        height: 32px;\n",
       "        padding: 0 0 0 0;\n",
       "        width: 32px;\n",
       "      }\n",
       "\n",
       "      .colab-df-generate:hover {\n",
       "        background-color: #E2EBFA;\n",
       "        box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
       "        fill: #174EA6;\n",
       "      }\n",
       "\n",
       "      [theme=dark] .colab-df-generate {\n",
       "        background-color: #3B4455;\n",
       "        fill: #D2E3FC;\n",
       "      }\n",
       "\n",
       "      [theme=dark] .colab-df-generate:hover {\n",
       "        background-color: #434B5C;\n",
       "        box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
       "        filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
       "        fill: #FFFFFF;\n",
       "      }\n",
       "    </style>\n",
       "    <button class=\"colab-df-generate\" onclick=\"generateWithVariable('results')\"\n",
       "            title=\"Generate code using this dataframe.\"\n",
       "            style=\"display:none;\">\n",
       "\n",
       "  <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
       "       width=\"24px\">\n",
       "    <path d=\"M7,19H8.4L18.45,9,17,7.55,7,17.6ZM5,21V16.75L18.45,3.32a2,2,0,0,1,2.83,0l1.4,1.43a1.91,1.91,0,0,1,.58,1.4,1.91,1.91,0,0,1-.58,1.4L9.25,21ZM18.45,9,17,7.55Zm-12,3A5.31,5.31,0,0,0,4.9,8.1,5.31,5.31,0,0,0,1,6.5,5.31,5.31,0,0,0,4.9,4.9,5.31,5.31,0,0,0,6.5,1,5.31,5.31,0,0,0,8.1,4.9,5.31,5.31,0,0,0,12,6.5,5.46,5.46,0,0,0,6.5,12Z\"/>\n",
       "  </svg>\n",
       "    </button>\n",
       "    <script>\n",
       "      (() => {\n",
       "      const buttonEl =\n",
       "        document.querySelector('#id_708d1408-9beb-4ed8-bd05-885a7da4e266 button.colab-df-generate');\n",
       "      buttonEl.style.display =\n",
       "        google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
       "\n",
       "      buttonEl.onclick = () => {\n",
       "        google.colab.notebook.generateWithVariable('results');\n",
       "      }\n",
       "      })();\n",
       "    </script>\n",
       "  </div>\n",
       "\n",
       "    </div>\n",
       "  </div>\n"
      ],
      "text/plain": [
       "                                               title  \\\n",
       "0  How to maintain sanity between DEV-STG-PROD in...   \n",
       "1  Microsoft Fabric Warehouse Deployment Issue(s)...   \n",
       "2  Microsoft Fabric — Stored Procedure Not Reflec...   \n",
       "\n",
       "                                            headings  \\\n",
       "0  [UselessAI.in, How to communicate between work...   \n",
       "1  [UselessAI.in, HOW TO SOLVE THIS ERROR — “DMSI...   \n",
       "2  [UselessAI.in, Stored procedure activities do ...   \n",
       "\n",
       "                                             content  \\\n",
       "0  Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...   \n",
       "1  Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...   \n",
       "2  Sign up\\nSign in\\nSign up\\nSign in\\nHome\\nLibr...   \n",
       "\n",
       "                                                 url  \\\n",
       "0  https://theshresthshukla.medium.com/how-to-mai...   \n",
       "1  https://uselessai.in/microsoft-fabric-warehous...   \n",
       "2  https://uselessai.in/microsoft-fabric-stored-p...   \n",
       "\n",
       "                                           embedding  _relevance_score  \n",
       "0  [0.06621531, -0.056588776, 0.022645105, 0.0588...          0.911824  \n",
       "1  [0.03162163, -0.027363822, 0.00063092454, 0.09...          0.700000  \n",
       "2  [0.045589827, -0.027090807, 0.028146721, 0.100...          0.300000  "
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# enough experimentation :) using Hybrid search approach to take top 3 context\n",
    "from lancedb.rerankers import LinearCombinationReranker\n",
    "\n",
    "# i'd urge you to experiment with different rerankers\n",
    "reranker = LinearCombinationReranker(weight=0.7)\n",
    "\n",
    "# in this case you need to pass both FTS column and Vector Column Name\n",
    "# Weight = 0 Means pure Text Search (BM-25) and 1 means pure Sementic (Vector) Search. Clearly you can experiment with different reranking algorithms here but we'll keep it simple since it serves the purpose.\n",
    "\n",
    "results = (\n",
    "    table.search(\n",
    "        \"warehouse deployment\",\n",
    "        query_type=\"hybrid\",\n",
    "        fts_columns=[\"headings\", \"content\"],\n",
    "        vector_column_name=\"embedding\",\n",
    "    )\n",
    "    .rerank(reranker=reranker)\n",
    "    .limit(3)\n",
    "    .to_pandas()\n",
    ")\n",
    "\n",
    "results\n",
    "# and i think this serves the purpose of giving relevant context to my podcast script. Now you can enhance this based on other data for more personalization."
   ]
  },
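   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# OPTIONAL: a minimal sketch for comparing reranker weights, assuming `table` is the\n",
     "# LanceDB table created above. Results depend entirely on your data, so treat this as\n",
     "# an experiment helper rather than a definitive tuning method.\n",
     "from lancedb.rerankers import LinearCombinationReranker\n",
     "\n",
     "for w in (0.0, 0.3, 0.5, 0.7, 1.0):\n",
     "    r = LinearCombinationReranker(weight=w)\n",
     "    top = (\n",
     "        table.search(\n",
     "            \"warehouse deployment\",\n",
     "            query_type=\"hybrid\",\n",
     "            fts_columns=[\"headings\", \"content\"],\n",
     "            vector_column_name=\"embedding\",\n",
     "        )\n",
     "        .rerank(reranker=r)\n",
     "        .limit(3)\n",
     "        .to_pandas()\n",
     "    )\n",
     "    # Print which titles each weight surfaces, to eyeball the BM25 vs. vector trade-off.\n",
     "    print(f\"weight={w} -> {top['title'].tolist()}\")"
    ]
   },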
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "S5Ybb_wsN7i5",
    "outputId": "bc35e2cd-5285-4af8-c184-ce49a04cd01a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Reference : 0 -> Title - How to maintain sanity between DEV-STG-PROD in Fabric? — Tracking Changes via Deployment Pipeline, Headings - ['UselessAI.in', 'How to communicate between workspaces in Fabric?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'No responses yet'], Content - Sign up\n",
      "Sign in\n",
      "Sign up\n",
      "Sign in\n",
      "Home\n",
      "Library\n",
      "Stories\n",
      "Stats\n",
      "Home\n",
      "Newsletter\n",
      "About\n",
      "Follow publication\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Follow publication\n",
      "Member-only story\n",
      "Shresth Shukla\n",
      "Follow\n",
      "UselessAI.in\n",
      "--\n",
      "Share\n",
      "My blogs are 100% free to read. Stuck behind Paywall? Read this blog for FREE —Click Here\n",
      "Every good data project goes through three stages — Development, Testing (often called Staging), and Production. Somewhere in between, there comes a situation where you make direct changes in the staging environment — either during testing or due to manual effort required after deploying certain items from development to staging.\n",
      "For example, Fabric currently doesn’t support deploying warehouse connections automatically, so you might need to change this manually in another environment after deployment. A similar situation could happen with some parts of the code — like making manual entries in tables post-deployment or fixing bugs directly in staging during testing.\n",
      "And this is where things get messy. If you make changes in the staging notebook but don’t immediately apply them to the development environment, you might face issues later. This is a common mistake — people often fix bugs quickly in testing but forget to sync those changes back to development.\n",
      "Hello all, welcome to this Fabric series, where we share interesting content about development and deployment on Microsoft Fabric.\n",
      "--\n",
      "--\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Data & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\n",
      "Help\n",
      "Status\n",
      "About\n",
      "Careers\n",
      "Press\n",
      "Blog\n",
      "Privacy\n",
      "Terms\n",
      "Text to speech\n",
      "Teams\n",
      "\n",
      "\n",
      "Reference : 1 -> Title - Microsoft Fabric Warehouse Deployment Issue(s) and Potential Solution(s) — DmsImportDatabaseException, Headings - ['UselessAI.in', 'HOW TO SOLVE THIS ERROR — “DMSIMPORTDATABASEEXCEPTION”', 'How to deploy warehouses in Microsoft Fabric without database exception?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'Responses (1)'], Content - Sign up\n",
      "Sign in\n",
      "Sign up\n",
      "Sign in\n",
      "Home\n",
      "Library\n",
      "Stories\n",
      "Stats\n",
      "Home\n",
      "Newsletter\n",
      "About\n",
      "Follow publication\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Follow publication\n",
      "Member-only story\n",
      "Shresth Shukla\n",
      "Follow\n",
      "UselessAI.in\n",
      "--\n",
      "1\n",
      "Share\n",
      "NOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link — CLICK HERE. ❤\n",
      "Who am I? -> Hi, Shresth Shukla this side. I’m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I’m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps 👏 — it’ll motivate me to write more about Data and AI.\n",
      "First things first! This will be a short and very specific blog about the issue I faced during the deployment of a warehouse in Microsoft Fabric. I needed to fix it to ensure it went into the testing environment!\n",
      "Note that before you even think about deployment, you need to be an admin of the workspace, and you should have a Microsoft subscription, as mentioned in their documentation —\n",
      "--\n",
      "--\n",
      "1\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Data & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\n",
      "Help\n",
      "Status\n",
      "About\n",
      "Careers\n",
      "Press\n",
      "Blog\n",
      "Privacy\n",
      "Terms\n",
      "Text to speech\n",
      "Teams\n",
      "\n",
      "\n",
      "Reference : 2 -> Title - Microsoft Fabric — Stored Procedure Not Reflecting Warehouse Connection Change in Data Factory, Headings - ['UselessAI.in', 'Stored procedure activities do not persist warehouse connection changes. :)', 'How to fix warehouse connection in stored procedure activity after pipeline deployment without deleting activity?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'No responses yet'], Content - Sign up\n",
      "Sign in\n",
      "Sign up\n",
      "Sign in\n",
      "Home\n",
      "Library\n",
      "Stories\n",
      "Stats\n",
      "Home\n",
      "Newsletter\n",
      "About\n",
      "Follow publication\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Follow publication\n",
      "Member-only story\n",
      "Shresth Shukla\n",
      "Follow\n",
      "UselessAI.in\n",
      "--\n",
      "Share\n",
      "NOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link — CLICK HERE. ❤\n",
      "Who am I? -> Hi, Shresth Shukla this side. I’m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I’m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps 👏 — it’ll motivate me to write more about Data and AI.\n",
      "Hi everybody, this is Part 2 of the Microsoft Fabric Series. In the first part, we learned and explored potential issues that might arise during warehouse deployment. A very common issue occurs when deploying for the first time and moving from the development to the staging/test environment. You can find it here.\n",
      "--\n",
      "--\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Data & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\n",
      "Help\n",
      "Status\n",
      "About\n",
      "Careers\n",
      "Press\n",
      "Blog\n",
      "Privacy\n",
      "Terms\n",
      "Text to speech\n",
      "Teams\n"
     ]
    }
   ],
   "source": [
    "# using list view to extract text easily.\n",
    "results = (\n",
    "    table.search(\n",
    "        \"warehouse deployment\",\n",
    "        query_type=\"hybrid\",\n",
    "        fts_columns=[\"headings\", \"content\"],\n",
    "        vector_column_name=\"embedding\",\n",
    "    )\n",
    "    .rerank(reranker=reranker)\n",
    "    .limit(3)\n",
    "    .to_list()\n",
    ")\n",
    "\n",
    "\n",
    "# NOTE - when you take conversation table in consideration you might need that data as well to combine and pass it as context\n",
    "\n",
    "formatted_strings = []\n",
    "for record in results:\n",
    "    # Retrieve values with default empty strings if keys are missing\n",
    "    title = record.get(\"title\", \"\")\n",
    "    headings = record.get(\"headings\", \"\")\n",
    "    content = record.get(\"content\", \"\")\n",
    "\n",
    "    # Format the string as \"Title - {title}, Headings - {headings}, Content - {content}\"\n",
    "    formatted_string = f\"Title - {title}, Headings - {headings}, Content - {content}\"\n",
    "\n",
    "    # Append the formatted string to the list\n",
    "    formatted_strings.append(formatted_string)\n",
    "\n",
    "# Combine all formatted strings into a single string with line breaks\n",
    "final_output = \"\\n\\n\\n\".join(\n",
    "    f\"Reference : {i} -> {x}\" for i, x in enumerate(formatted_strings)\n",
    ")\n",
    "print(final_output)\n",
    "\n",
    "# if you are planning to search for keywords manually for context we'll pass this final output as additional context to model to have infomration about my previous blogs. we could have directly passed this as dictionary too and processed it later but I'm assuming i'm passing this along with the input article each time as string.\n",
    "\n",
    "# or there's an even better alternative to use Keyword extraction models that extracts specific keyword to search from the vector database and automates the complete process. We'll use it when generating script below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {
    "id": "-5O55oo8OLnN"
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7-ruiS3rJrN4"
   },
   "source": [
    "#### Generate Podcast using Gemini and ElevenLabs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SrSmI_U9Yl4m"
   },
   "source": [
    "![image.png]()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "id": "pu9B9vZiJu46"
   },
   "outputs": [],
   "source": [
    "# currently i'm passing new blog as text but i hope you understand we can take input from any source be it documents or URL directly. taking text directly for simplicity here.\n",
    "\n",
    "sample_blog = \"\"\"\n",
    "      How does Deployment Pipeline work on Fabric?\n",
    "      Fabric Warehouse Deployment Issues — Part 2\n",
    "      How to deploy warehouses with views in Fabric without any issues?\n",
    "\n",
    "      Note — My blogs are 100% free to read. Stuck behind paywall? Click here.\n",
    "\n",
    "      If you are working on Fabric and need to deploy content from one workspace to another, you might have faced multiple issues, especially while deploying warehouses.\n",
    "\n",
    "      I had written about this earlier as well, where I discussed potential problems and solutions that might occur during warehouse deployment. This blog is a continuation of the previous one since I found new issues and solutions for deploying items in Fabric.\n",
    "\n",
    "\n",
    "      Dependency during deployment\n",
    "      Fabric assumes that you have already deployed the dependent items of the current item you want to deploy.\n",
    "\n",
    "      For example, if you deploy a pipeline that contains a notebook that hasn’t been deployed yet, the deployment will ultimately fail. To fix this, you need to deploy the notebook before deploying the Data Factory pipeline. You can do this simply by clicking “Select related” on the deployment interface.\n",
    "\n",
    "\n",
    "      Similarly, for Power BI reports, you can find linked items in Fabric by clicking on “View Lineage” for each item.\n",
    "\n",
    "\n",
    "      They also have documentation where you can read certain sections to better understand the deployment process.\n",
    "\n",
    "\n",
    "      Screenshot from Microsoft Learn Document\n",
    "      But my recent observation is that warehouses not only fail when no tables are created in the lakehouses but also when a table used in the warehouse is missing from the lakehouse referenced in the view query.\n",
    "\n",
    "      For example, if you have a table in lakehouse_1 and deploy the warehouse before creating that table in the target workspace, the warehouse will not be deployed. We have discussed this earlier.\n",
    "\n",
    "      Deploying Fabric Warehouse with Views\n",
    "      My recent observation is that if you create a view in the warehouse that refers to another view in the same warehouse, it will cause an error.\n",
    "\n",
    "      For example, if you create a view wh.view_1 and use it in another view view_2 within the same warehouse — where view_1 is referenced somewhere in the query — Fabric does not support this yet. As a result, the deployment will fail with the following error message: ‘DmsImportDatabaseException’.\n",
    "\n",
    "\n",
    "      Error when you deploy views referring to another views\n",
    "      As of now, there is no way to deploy a warehouse with views that reference other views inside the query.\n",
    "\n",
    "      The only potential solution is to delete the view from your source workspace and then recreate it in the target workspace after deployment. This could be a one-time activity, but there must be at least one warehouse with views that you are referring to, in order to deploy a view.\n",
    "\n",
    "      This is a simple yet frustrating bug in Fabric that you should keep in mind during deployment.\n",
    "\n",
    "      I have shared my experiences with Fabric through blogs that you can refer to during your development. I’m sure they will be helpful.\n",
    "\n",
    "      \"\"\""
   ]
  },
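   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# OPTIONAL sketch: loading the blog text from a URL instead of pasting it, since the\n",
     "# comment above notes any source would work. This assumes `requests` and `beautifulsoup4`\n",
     "# are installed; Medium pages may need extra handling (paywalls, dynamic content), so\n",
     "# treat this as a starting point, not a production scraper.\n",
     "import requests\n",
     "from bs4 import BeautifulSoup\n",
     "\n",
     "def blog_text_from_url(url: str) -> str:\n",
     "    html = requests.get(url, timeout=30).text\n",
     "    soup = BeautifulSoup(html, \"html.parser\")\n",
     "    # Join paragraph text; adjust the selector for your source site.\n",
     "    return \"\\n\".join(p.get_text(strip=True) for p in soup.find_all(\"p\"))\n",
     "\n",
     "# sample_blog = blog_text_from_url(\"https://example.com/my-blog-post\")"
    ]
   },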
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "E6t24li0O52-",
    "outputId": "8173244a-de35-4221-ef82-a2dc46868054"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracted Keywords: ['warehouse deployment', 'deployment fabric', 'deploying fabric', 'deploy warehouse', 'deploy warehouses']\n"
     ]
    }
   ],
   "source": [
    "# Extract relevant search keywords/phrases from the query\n",
    "search_keywords = extract_keywords(sample_blog)\n",
    "print(\"Extracted Keywords:\", search_keywords)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "collapsed": true,
    "id": "ypw2rq_TP1Ir",
    "outputId": "c75ac35b-2fba-4639-a45c-f5976d31286b"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Reference : 0 -> Title - Microsoft Fabric Warehouse Deployment Issue(s) and Potential Solution(s) — DmsImportDatabaseException, Headings - ['UselessAI.in', 'HOW TO SOLVE THIS ERROR — “DMSIMPORTDATABASEEXCEPTION”', 'How to deploy warehouses in Microsoft Fabric without database exception?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'Responses (1)'], Content - Sign up\n",
      "Sign in\n",
      "Sign up\n",
      "Sign in\n",
      "Home\n",
      "Library\n",
      "Stories\n",
      "Stats\n",
      "Home\n",
      "Newsletter\n",
      "About\n",
      "Follow publication\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Follow publication\n",
      "Member-only story\n",
      "Shresth Shukla\n",
      "Follow\n",
      "UselessAI.in\n",
      "--\n",
      "1\n",
      "Share\n",
      "NOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link — CLICK HERE. ❤\n",
      "Who am I? -> Hi, Shresth Shukla this side. I’m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I’m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps 👏 — it’ll motivate me to write more about Data and AI.\n",
      "First things first! This will be a short and very specific blog about the issue I faced during the deployment of a warehouse in Microsoft Fabric. I needed to fix it to ensure it went into the testing environment!\n",
      "Note that before you even think about deployment, you need to be an admin of the workspace, and you should have a Microsoft subscription, as mentioned in their documentation —\n",
      "--\n",
      "--\n",
      "1\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Data & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\n",
      "Help\n",
      "Status\n",
      "About\n",
      "Careers\n",
      "Press\n",
      "Blog\n",
      "Privacy\n",
      "Terms\n",
      "Text to speech\n",
      "Teams\n",
      "\n",
      "\n",
      "Reference : 1 -> Title - How to maintain sanity between DEV-STG-PROD in Fabric? — Tracking Changes via Deployment Pipeline, Headings - ['UselessAI.in', 'How to communicate between workspaces in Fabric?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'No responses yet'], Content - Sign up\n",
      "Sign in\n",
      "Sign up\n",
      "Sign in\n",
      "Home\n",
      "Library\n",
      "Stories\n",
      "Stats\n",
      "Home\n",
      "Newsletter\n",
      "About\n",
      "Follow publication\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Follow publication\n",
      "Member-only story\n",
      "Shresth Shukla\n",
      "Follow\n",
      "UselessAI.in\n",
      "--\n",
      "Share\n",
      "My blogs are 100% free to read. Stuck behind Paywall? Read this blog for FREE —Click Here\n",
      "Every good data project goes through three stages — Development, Testing (often called Staging), and Production. Somewhere in between, there comes a situation where you make direct changes in the staging environment — either during testing or due to manual effort required after deploying certain items from development to staging.\n",
      "For example, Fabric currently doesn’t support deploying warehouse connections automatically, so you might need to change this manually in another environment after deployment. A similar situation could happen with some parts of the code — like making manual entries in tables post-deployment or fixing bugs directly in staging during testing.\n",
      "And this is where things get messy. If you make changes in the staging notebook but don’t immediately apply them to the development environment, you might face issues later. This is a common mistake — people often fix bugs quickly in testing but forget to sync those changes back to development.\n",
      "Hello all, welcome to this Fabric series, where we share interesting content about development and deployment on Microsoft Fabric.\n",
      "--\n",
      "--\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Data & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\n",
      "Help\n",
      "Status\n",
      "About\n",
      "Careers\n",
      "Press\n",
      "Blog\n",
      "Privacy\n",
      "Terms\n",
      "Text to speech\n",
      "Teams\n",
      "\n",
      "\n",
      "Reference : 2 -> Title - Microsoft Fabric — Stored Procedure Not Reflecting Warehouse Connection Change in Data Factory, Headings - ['UselessAI.in', 'Stored procedure activities do not persist warehouse connection changes. :)', 'How to fix warehouse connection in stored procedure activity after pipeline deployment without deleting activity?', 'Published in UselessAI.in', 'Written by Shresth Shukla', 'No responses yet'], Content - Sign up\n",
      "Sign in\n",
      "Sign up\n",
      "Sign in\n",
      "Home\n",
      "Library\n",
      "Stories\n",
      "Stats\n",
      "Home\n",
      "Newsletter\n",
      "About\n",
      "Follow publication\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Follow publication\n",
      "Member-only story\n",
      "Shresth Shukla\n",
      "Follow\n",
      "UselessAI.in\n",
      "--\n",
      "Share\n",
      "NOTE: If you are unable to read this article due to a membership restriction, you can access it for free using this link — CLICK HERE. ❤\n",
      "Who am I? -> Hi, Shresth Shukla this side. I’m currently working with one of the Data and AI teams at EY and use Microsoft Fabric in my day-to-day tasks related to Data Engineering, Analytics & AI. Writing this blog post was fun, and I learned a lot in the process. So, I’m sharing it with you all. Hope you like it and learn from it! If you do, give it 50 claps 👏 — it’ll motivate me to write more about Data and AI.\n",
      "Hi everybody, this is Part 2 of the Microsoft Fabric Series. In the first part, we learned and explored potential issues that might arise during warehouse deployment. A very common issue occurs when deploying for the first time and moving from the development to the staging/test environment. You can find it here.\n",
      "--\n",
      "--\n",
      "WE READ. WE BUILD.  — Learning AI by reading and building.\n",
      "Data & AI @ EY, Freelance Technical Writer (AI/ML/GenAI Domain) MTech @BITS Pilani Maths Hons @DU\n",
      "Help\n",
      "Status\n",
      "About\n",
      "Careers\n",
      "Press\n",
      "Blog\n",
      "Privacy\n",
      "Terms\n",
      "Text to speech\n",
      "Teams\n"
     ]
    }
   ],
   "source": [
    "results = (\n",
    "    table.search(\n",
    "        \" \".join(search_keywords),  # Use extracted keywords for FTS + vector search\n",
    "        query_type=\"hybrid\",\n",
    "        fts_columns=[\"headings\", \"content\"],\n",
    "        vector_column_name=\"embedding\",\n",
    "    )\n",
    "    .rerank(reranker=reranker)\n",
    "    .limit(3)\n",
    "    .to_list()\n",
    ")\n",
    "\n",
    "formatted_strings = []\n",
    "for record in results:\n",
    "    # Retrieve values with default empty strings if keys are missing\n",
    "    title = record.get(\"title\", \"\")\n",
    "    headings = record.get(\"headings\", \"\")\n",
    "    content = record.get(\"content\", \"\")\n",
    "\n",
    "    # Format the string as \"Title - {title}, Headings - {headings}, Content - {content}\"\n",
    "    formatted_string = f\"Title - {title}, Headings - {headings}, Content - {content}\"\n",
    "\n",
    "    # Append the formatted string to the list\n",
    "    formatted_strings.append(formatted_string)\n",
    "\n",
    "# Combine all formatted strings into a single string with line breaks. We'll pass this as context to LLM for Script generation.\n",
    "context = \"\\n\\n\\n\".join(\n",
    "    f\"Reference : {i} -> {x}\" for i, x in enumerate(formatted_strings)\n",
    ")\n",
    "print(context)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {
    "id": "lbeZ1_mgP3Jq"
   },
   "outputs": [],
   "source": [
    "# using gemini to generate transcript\n",
    "script = generate_conversation(sample_blog, \"LanecDB\", speakers_list, context)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "collapsed": true,
    "id": "l4UXzzmIU8A-",
    "outputId": "c5ae5481-62aa-4be6-c787-75def183bb7d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'speaker': 'Shresth',\n",
      "  'text': \"Hey everyone, and welcome to the podcast! Today, we're diving deep \"\n",
      "          'into the world of Fabric deployments, particularly around '\n",
      "          'warehouses and those tricky views.'},\n",
      " {'speaker': 'Arjun',\n",
      "  'text': \"Yeah, deployments can be a real headache.  Especially when you're \"\n",
      "          'dealing with complex dependencies.'},\n",
      " {'speaker': 'Geet',\n",
      "  'text': \"Absolutely!  I've run into so many roadblocks. It's like navigating \"\n",
      "          'a maze blindfolded sometimes.'},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': 'Tell me about it! In a recent article I wrote, I discussed some of '\n",
      "          'the common issues and solutions, building on a previous post where '\n",
      "          \"I tackled the dreaded 'DmsImportDatabaseException'.\"},\n",
      " {'speaker': 'Arjun',\n",
      "  'text': \"Oh, I remember that one. It's usually related to missing tables in \"\n",
      "          'the target lakehouse, right?'},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': 'Exactly.  Fabric expects all dependencies to be deployed before the '\n",
      "          'main item. So, if your pipeline uses a notebook, deploy the '\n",
      "          \"notebook first. Use the 'Select related' option, it's a lifesaver.\"},\n",
      " {'speaker': 'Geet',\n",
      "  'text': \"And for Power BI reports, 'View Lineage' is your best friend. Helps \"\n",
      "          'you track down those hidden connections.'},\n",
      " {'speaker': 'Arjun',\n",
      "  'text': \"Good tips. But I've found that even with all dependencies in place, \"\n",
      "          'warehouse deployments can still fail.'},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': \"Right. And I've recently discovered another culprit: views. \"\n",
      "          'Specifically, views that reference other views within the same '\n",
      "          \"warehouse. Fabric doesn't support this yet, leading to that same \"\n",
      "          \"'DmsImportDatabaseException' error.\"},\n",
      " {'speaker': 'Geet',\n",
      "  'text': \"Ugh, that sounds frustrating.  So, what's the workaround?\"},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': 'Well, the current solution is a bit of a manual process.  You have '\n",
      "          'to delete the view from the source workspace and recreate it in the '\n",
      "          'target after deployment.  Not ideal, I know.'},\n",
      " {'speaker': 'Arjun',\n",
      "  'text': \"Hmm, a bit of a pain. Hopefully, they'll fix that soon. It seems \"\n",
      "          'like a pretty fundamental feature.'},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': \"I agree. I've highlighted this issue in my article, hoping to raise \"\n",
      "          'awareness and maybe nudge the Fabric team in the right direction.'},\n",
      " {'speaker': 'Geet',\n",
      "  'text': \"It's important to share these experiences.  It helps everyone in \"\n",
      "          'the community avoid similar pitfalls.'},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': \"Absolutely. I've also written about other deployment challenges, \"\n",
      "          'like maintaining sanity between development, staging, and '\n",
      "          'production environments.  Things can get messy when you have to '\n",
      "          'make manual changes in staging.'},\n",
      " {'speaker': 'Arjun',\n",
      "  'text': 'Oh yeah, the classic dev-staging-prod synchronization problem.  '\n",
      "          \"It's so easy to lose track of changes.\"},\n",
      " {'speaker': 'Geet',\n",
      "  'text': \"I've been there.  Fixing a bug directly in staging and then \"\n",
      "          \"forgetting to update the development environment. It's a recipe for \"\n",
      "          'disaster.'},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': \"And another tricky one I've encountered is with stored procedures \"\n",
      "          'in Data Factory pipelines.  When you change the warehouse '\n",
      "          \"connection, it doesn't always persist after deployment.  Super \"\n",
      "          'annoying.'},\n",
      " {'speaker': 'Arjun',\n",
      "  'text': 'So, you end up having to manually update the connection in the '\n",
      "          'stored procedure activity every time?'},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': \"Exactly. It's a tedious workaround, and I'm hoping for a better \"\n",
      "          'solution soon.  But for now, these are the realities of working '\n",
      "          \"with Fabric deployments.  It has its quirks, but it's a powerful \"\n",
      "          'platform nonetheless.'},\n",
      " {'speaker': 'Geet',\n",
      "  'text': 'Definitely powerful.  And these kinds of discussions are crucial '\n",
      "          'for navigating the complexities and making the most of it.'},\n",
      " {'speaker': 'Arjun',\n",
      "  'text': 'Totally agree.  Sharing our experiences, workarounds, and '\n",
      "          'frustrations helps us all learn and improve. Thanks for bringing '\n",
      "          'these issues to light, Shresth.'},\n",
      " {'speaker': 'Shresth',\n",
      "  'text': 'My pleasure. Hopefully, these insights will save some of you from '\n",
      "          'pulling your hair out during your next Fabric deployment. And be '\n",
      "          'sure to check out my articles for more details and tips. Until next '\n",
      "          'time, happy deploying!'}]\n"
     ]
    }
   ],
   "source": [
    "pprint(script)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "XjWauUMgUb8r",
    "outputId": "61d5e5f5-f77f-4259-f807-8e41920fe075"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Conversation saved to ./conversation/conversation.json\n"
     ]
    }
   ],
   "source": [
    "# not mandatory but could be used as context for next blog if saved into another table. We can involve mutiple retrievers and infact use agents too :)\n",
    "save_conversation(script)"
   ]
  },
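  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, here is a minimal sketch of what a `save_conversation` helper could look like, assuming the script is a list of `{'speaker': ..., 'text': ...}` dicts as printed above. The function body and the `./conversation` output path mirror this notebook's cells, but this is an illustrative implementation, not necessarily the original one:\n",
    "\n",
    "```python\n",
    "import json\n",
    "import os\n",
    "\n",
    "def save_conversation(script, out_dir=\"./conversation\"):\n",
    "    # Persist the generated dialogue so it can seed context for a future episode.\n",
    "    os.makedirs(out_dir, exist_ok=True)\n",
    "    path = os.path.join(out_dir, \"conversation.json\")\n",
    "    with open(path, \"w\", encoding=\"utf-8\") as f:\n",
    "        json.dump(script, f, ensure_ascii=False, indent=2)\n",
    "    print(f\"Conversation saved to {path}\")\n",
    "```"
   ]
  },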
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "xcbKbXxxV8Ux",
    "outputId": "45bdcb24-e77f-4fac-bf0b-c66790a26fa9"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['audio-files/0_Shresth.mp3',\n",
       " 'audio-files/1_Arjun.mp3',\n",
       " 'audio-files/2_Geet.mp3',\n",
       " 'audio-files/3_Shresth.mp3',\n",
       " 'audio-files/4_Arjun.mp3',\n",
       " 'audio-files/5_Shresth.mp3',\n",
       " 'audio-files/6_Geet.mp3',\n",
       " 'audio-files/7_Arjun.mp3',\n",
       " 'audio-files/8_Shresth.mp3',\n",
       " 'audio-files/9_Geet.mp3',\n",
       " 'audio-files/10_Shresth.mp3',\n",
       " 'audio-files/11_Arjun.mp3',\n",
       " 'audio-files/12_Shresth.mp3',\n",
       " 'audio-files/13_Geet.mp3',\n",
       " 'audio-files/14_Shresth.mp3',\n",
       " 'audio-files/15_Arjun.mp3',\n",
       " 'audio-files/16_Geet.mp3',\n",
       " 'audio-files/17_Shresth.mp3',\n",
       " 'audio-files/18_Arjun.mp3',\n",
       " 'audio-files/19_Shresth.mp3',\n",
       " 'audio-files/20_Geet.mp3',\n",
       " 'audio-files/21_Arjun.mp3',\n",
       " 'audio-files/22_Shresth.mp3']"
      ]
     },
     "execution_count": 56,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# to generate audio using elevenlabs\n",
    "\n",
    "generate_audio(script, speakers_map)  # speakers_map is defined in config cell"
   ]
  },
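  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to adapt `generate_audio`, here is a hedged sketch of how each dialogue turn could be synthesized. It assumes the ElevenLabs Python SDK's `text_to_speech.convert` interface and a `speakers_map` dict mapping speaker names to voice IDs (both assumptions; check the SDK docs for your installed version). The filenames match the `0_Shresth.mp3`, `1_Arjun.mp3`, ... pattern in the output above:\n",
    "\n",
    "```python\n",
    "import os\n",
    "from elevenlabs.client import ElevenLabs\n",
    "\n",
    "client = ElevenLabs(api_key=\"YOUR_API_KEY\")  # placeholder; use your own key\n",
    "\n",
    "def generate_audio(script, speakers_map, out_dir=\"audio-files\"):\n",
    "    os.makedirs(out_dir, exist_ok=True)\n",
    "    files = []\n",
    "    for i, turn in enumerate(script):\n",
    "        voice_id = speakers_map[turn[\"speaker\"]]  # speaker name -> voice ID\n",
    "        audio = client.text_to_speech.convert(\n",
    "            text=turn[\"text\"],\n",
    "            voice_id=voice_id,\n",
    "            model_id=\"eleven_multilingual_v2\",\n",
    "        )\n",
    "        path = os.path.join(out_dir, f\"{i}_{turn['speaker']}.mp3\")\n",
    "        with open(path, \"wb\") as f:\n",
    "            for chunk in audio:  # convert() streams the audio as byte chunks\n",
    "                f.write(chunk)\n",
    "        files.append(path)\n",
    "    return files\n",
    "```"
   ]
  },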
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 35
    },
    "id": "r5k749aj9_e9",
    "outputId": "f060f079-eb27-4557-8558-eecdd4c11bd2"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.google.colaboratory.intrinsic+json": {
       "type": "string"
      },
      "text/plain": [
       "'podcast.mp3'"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output_file = \"podcast.mp3\"\n",
    "merge_audios(\"audio-files\", output_file)"
   ]
  },
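  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For completeness, `merge_audios` can be implemented with `pydub` (which needs `ffmpeg` installed for MP3 support). This sketch is an assumption about the helper's internals, not the notebook's original code. Note the numeric sort on the filename prefix, so that `10_...` doesn't slot in before `2_...`:\n",
    "\n",
    "```python\n",
    "import os\n",
    "from pydub import AudioSegment  # requires ffmpeg for mp3 decoding/encoding\n",
    "\n",
    "def merge_audios(folder, output_file):\n",
    "    # Sort by the numeric prefix so clips play in conversation order.\n",
    "    names = sorted(os.listdir(folder), key=lambda f: int(f.split(\"_\")[0]))\n",
    "    combined = AudioSegment.empty()\n",
    "    for name in names:\n",
    "        combined += AudioSegment.from_mp3(os.path.join(folder, name))\n",
    "    combined.export(output_file, format=\"mp3\")\n",
    "    return output_file\n",
    "```"
   ]
  },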
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "S8TQWfZJYM5o"
   },
   "source": [
    "And that's it. Listen to this podcast and Stay tuned for the next one.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "R2FlhJvEYdOX"
   },
   "source": [
    "![image.png]()"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
