{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9c79fdcf",
   "metadata": {},
   "source": [
    "**NOTE:** This notebook was written in 2024, and is not guaranteed to work with the latest version of llama-index. It is presented here for reference only."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "42c463c6-a36e-41ed-9897-0b7b25417deb",
   "metadata": {},
   "source": [
    "![Slide One](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/1.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "08f96a7c-6854-4421-ae55-7206ca265382",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Two](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/2.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4bf4cac4-4a35-4eb1-a2a2-39388ee69030",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Three](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/3.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0eace5e-b710-4879-822b-2fe88257bec2",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Four](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/4-updated.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3de752fb-189e-43c9-b758-1c443bf6fd94",
   "metadata": {},
   "source": [
    "## Example: A Gang of LLMs Tells a Story"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a450e92a-de33-44c3-854e-05ed1ef31b3c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# INSTALL LLM INTEGRATION PACKAGES\n",
    "# %pip install llama-index-llms-openai -q\n",
    "# %pip install llama-index-llms-cohere -q\n",
    "# %pip install llama-index-llms-anthropic -q\n",
    "# %pip install llama-index-llms-mistralai -q\n",
    "# %pip install llama-index-vector-stores-qdrant -q\n",
    "# %pip install llama-index-agent-openai -q\n",
    "# %pip install llama-index-agent-introspective -q\n",
    "# %pip install google-api-python-client -q\n",
    "# %pip install llama-index-program-openai -q\n",
    "# %pip install llama-index-readers-file -q"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "74fd3581-e029-45f9-805d-6d3b07cf8651",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.anthropic import Anthropic\n",
    "from llama_index.llms.cohere import Cohere\n",
    "from llama_index.llms.mistralai import MistralAI\n",
    "from llama_index.llms.openai import OpenAI\n",
    "\n",
    "anthropic_llm = Anthropic(model=\"claude-3-opus-20240229\")\n",
    "cohere_llm = Cohere(model=\"command\")\n",
    "mistral_llm = MistralAI(model=\"mistral-large-latest\")\n",
    "openai_llm = OpenAI(model=\"gpt-4-turbo-preview\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "15685a8e-6efd-4f38-9fed-a2f8f4a0c5a0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "In a forgotten village, a mysterious stranger arrived, carrying a peculiar box that would change everything forever.\n"
     ]
    }
   ],
   "source": [
    "start = anthropic_llm.complete(\n",
    "    \"Please start a random story. Limit your response to 20 words.\"\n",
    ")\n",
    "print(start)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e41ddbed-d684-43a5-838e-6d0c90fa9942",
   "metadata": {},
   "outputs": [],
   "source": [
    "middle = cohere_llm.complete(\n",
    "    f\"Please continue the provided story. Limit your response to 20 words.\\n\\n {start.text}\"\n",
    ")\n",
    "climax = mistral_llm.complete(\n",
    "    f\"Please continue the attached story. Your part is the climax of the story, so make it exciting! Limit your response to 20 words.\\n\\n {start.text + middle.text}\"\n",
    ")\n",
    "ending = openai_llm.complete(\n",
    "    f\"Please continue the attached story. Your part is the end of the story, so wrap it up! Limit your response to 20 words.\\n\\n {start.text + middle.text + climax.text}\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8dd386fe-e1ca-4f04-abc7-fe93d018eadb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "In a forgotten village, a mysterious stranger arrived, carrying a peculiar box that would change everything forever.\n",
      "\n",
      " The mysterious stranger with the peculiar box was greeted with a mix of curiosity and suspicion by the forgotten village residents. \n",
      "\n",
      "Suddenly, the box glowed, revealing a mythical creature. It granted wishes, transforming the village into a prosperous utopia.\n",
      "\n",
      "Years passed, the village thrived, and the stranger's identity remained a mystery, forever changing their fate with a box of miracles.\n"
     ]
    }
   ],
   "source": [
    "# let's see our story!\n",
    "print(f\"{start}\\n\\n{middle}\\n\\n{climax}\\n\\n{ending}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c4e2ea2d-9505-4a65-bb2c-1b21671e5f6e",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Five](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/5.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc1cb754-ea9a-4144-97a4-a08460b8ee9a",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Six](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/6.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "03c90e36-e8ff-473d-b3b8-6b453358d09e",
   "metadata": {},
   "source": [
    "## Example: Emergent Abilities (Zero-Shot Classification)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "01452483-0f4d-4460-a758-d8dc0d74d696",
   "metadata": {},
   "outputs": [],
   "source": [
    "import nest_asyncio\n",
    "\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "22ed8b37-273d-4e05-ad46-36529bb63b01",
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "\n",
    "sample_texts = [\n",
    "    \"Hey, friend! How are you today?\",\n",
    "    \"Well, you're pretty crappy.\",\n",
    "]\n",
    "\n",
    "coros = []\n",
    "for txt in sample_texts:\n",
    "    coro = openai_llm.acomplete(\n",
    "        f\"Classify the attached text as 'toxic' or 'not toxic'.\\n\\n{txt}\"\n",
    "    )\n",
    "    coros.append(coro)\n",
    "classifications = await asyncio.gather(*coros)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2148a4a0-9814-4133-a253-f61878839f83",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[\"The attached text is 'not toxic'.\",\n",
       " \"The attached text would be classified as 'toxic'.\"]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[c.text for c in classifications]"
   ]
  },
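  {
   "cell_type": "markdown",
   "id": "b3f1a2c9",
   "metadata": {},
   "source": [
    "As an aside: `asyncio.gather` returns results in the same order as the coroutines passed in, so each text can be safely paired with its classification. A small sketch using the `sample_texts` and `classifications` from the cells above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b3f1a2d0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# gather preserves input order, so zip pairs each text with its result\n",
    "for txt, c in zip(sample_texts, classifications):\n",
    "    print(f\"{txt!r} -> {c.text}\")"
   ]
  },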
  {
   "cell_type": "markdown",
   "id": "229885d5-70d3-41c7-a156-f3a0874f7576",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Seven](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/7.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eaf24b47-43d2-4139-b681-26ae73a1d147",
   "metadata": {},
   "source": [
    "## Example: Chat Prompts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "da19765b-0bf8-46f2-ae74-e694f2e2c8b5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.llms import MessageRole, ChatMessage\n",
    "from llama_index.core.prompts import ChatPromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d44f958-4e02-4784-a697-4fa7df0baae4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# chat prompts\n",
    "chat_history_template = [\n",
    "    ChatMessage(\n",
    "        content=\"You are a helpful assistant that answers in the style of {style}\",\n",
    "        role=MessageRole.SYSTEM,\n",
    "    ),\n",
    "    ChatMessage(\n",
    "        content=\"Tell me a short joke using 20 words.\", role=MessageRole.USER\n",
    "    ),\n",
    "]\n",
    "chat_template = ChatPromptTemplate(chat_history_template)\n",
    "\n",
    "# chat llm\n",
    "cohere_chat = Cohere(model=\"command-r-plus\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14d481a2-02a7-4ccc-a408-f6ad78e00ff6",
   "metadata": {},
   "outputs": [],
   "source": [
    "shakespeare_response = cohere_chat.chat(\n",
    "    messages=chat_template.format_messages(style=\"Shakespeare\")\n",
    ")\n",
    "\n",
    "drake_response = cohere_chat.chat(\n",
    "    messages=chat_template.format_messages(style=\"Drake\")\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c50fbe3d-5045-4d8d-b302-4bcc90b6dd40",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "assistant: A knave in jest, did thus impart: \"Why did the chicken cross path? To get to thy other side!\"\n"
     ]
    }
   ],
   "source": [
    "print(shakespeare_response)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9cdd3f09-e939-4b71-a68d-b276cb0998be",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "assistant: Yo, why did the rapper bring an umbrella? ... He wanted some rain on him, know what I'm sayin'?\n"
     ]
    }
   ],
   "source": [
    "print(drake_response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1a9acc43-7e6e-40e6-9bcb-cf695f718baf",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Eight](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/8.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0d32def2-75ce-46b4-aef9-adbd21a47e58",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Nine](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/9.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20d71d94-e5a6-4ecd-a370-d7cb1f5f4124",
   "metadata": {},
   "source": [
    "## Example: In-Context Learning and Chain-of-Thought Prompting"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "33c73cac-2b53-45de-b8e3-909ad86226d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.prompts import PromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "30299e23-9e3a-412c-bfe9-6cda5c6943b2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# chain of thought prompting\n",
    "qa_prompt_template = PromptTemplate(\n",
    "    \"\"\"\n",
    "You are a knowledgeable assistant able to perform arithmetic reasoning.\n",
    "\n",
    "{examples}\n",
    "\n",
    "{new_example}\n",
    "\"\"\"\n",
    ")\n",
    "\n",
    "examples = \"\"\"\n",
    "Q: Roger has 5 tennis balls. He buys 2 more cans of\n",
    "tennis balls. Each can has 3 tennis balls. How many\n",
    "tennis balls does he have now?\n",
    "\n",
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls\n",
    "each is 6 tennis balls. 5 + 6 = 11. The answer is 11.\n",
    "\"\"\"\n",
    "\n",
    "new_example = \"\"\"\n",
    "Q: The cafeteria had 23 apples. If they used 20 to\n",
    "make lunch and bought 6 more, how many apples\n",
    "do they have?\n",
    "\"\"\"\n",
    "\n",
    "prompt = qa_prompt_template.format(examples=examples, new_example=new_example)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3cf0bfef-5c11-46c0-a4ba-f711d000e736",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "A: The cafeteria started with 23 apples and used 20 for lunch, leaving them with 3 apples. They then bought 6 more apples. So, 3 (remaining apples) + 6 (newly bought apples) = 9 apples. The answer is 9.\n"
     ]
    }
   ],
   "source": [
    "response = mistral_llm.complete(prompt)\n",
    "print(response)"
   ]
  },
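  {
   "cell_type": "markdown",
   "id": "c4d2e3f1",
   "metadata": {},
   "source": [
    "A related technique is zero-shot chain-of-thought prompting: instead of supplying a worked example, append a reasoning cue such as \"Let's think step by step.\" to the question. A sketch (the exact output will vary by model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4d2e3f2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# zero-shot CoT: no worked example, just a reasoning cue\n",
    "zero_shot_prompt = (\n",
    "    \"Q: The cafeteria had 23 apples. If they used 20 to make lunch \"\n",
    "    \"and bought 6 more, how many apples do they have?\\n\"\n",
    "    \"A: Let's think step by step.\"\n",
    ")\n",
    "print(mistral_llm.complete(zero_shot_prompt))"
   ]
  },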
  {
   "cell_type": "markdown",
   "id": "48fb1cb1-b078-4004-9ec9-035a354b37ef",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Ten](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/10-updated.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "100fbb87-9bef-4719-9fde-46fe5513a2c0",
   "metadata": {},
   "source": [
    "## Example: Structured Data Extraction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "39a4d543-4add-4df0-abec-27a238b0242a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.bridge.pydantic import BaseModel, Field\n",
    "from typing import Literal, List, Optional"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "284fbb5b-f118-47b3-906b-aebf26784669",
   "metadata": {},
   "outputs": [],
   "source": [
    "GOLF_CLUBS_LIST = Literal[\n",
    "    \"Driver\",\n",
    "    \"Putter\",\n",
    "    \"3wood\",\n",
    "    \"SW\",\n",
    "]\n",
    "\n",
    "MISHIT_LIST = Literal[\n",
    "    \"Shank\",\n",
    "    \"Fat\",\n",
    "    \"Topped\",\n",
    "]\n",
    "\n",
    "\n",
    "class ShotRecord(BaseModel):\n",
    "    \"\"\"Data class for storing attributes of a golf shot.\"\"\"\n",
    "\n",
    "    club: GOLF_CLUBS_LIST = Field(\n",
    "        description=\"The golf club used for the shot.\"\n",
    "    )\n",
    "    distance: int = Field(description=\"The distance the shot went\")\n",
    "    mishit: Optional[MISHIT_LIST] = Field(\n",
    "        description=\"If the shot was mishit, then what kind of mishit. Default is None.\",\n",
    "        default=None,\n",
    "    )\n",
    "    on_target: bool = Field(\n",
    "        description=\"Whether the shot was a good one and thus on target.\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c1ebafd8-a752-4489-8644-a76e611ae577",
   "metadata": {},
   "outputs": [],
   "source": [
    "golf_shot_prompt_template = PromptTemplate(\n",
    "    \"Here is a description of a golf shot by the user. Please use it \"\n",
    "    \"to record a data entry for this golf shot using the provided data class.\"\n",
    "    \"\\n\\n{shot_description}\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e0f86968-5d5f-44e7-b640-bd31a3ef3016",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "club='Driver' distance=300 mishit=None on_target=True\n"
     ]
    }
   ],
   "source": [
    "shot = openai_llm.structured_predict(\n",
    "    output_cls=ShotRecord,\n",
    "    prompt=golf_shot_prompt_template,\n",
    "    shot_description=\"I hit my driver perfectly, 300 yards on the fairway\",\n",
    ")\n",
    "print(shot)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1fbe9c61-3173-4af9-8b48-b43823bde869",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "club='SW' distance=5 mishit='Fat' on_target=False\n"
     ]
    }
   ],
   "source": [
    "shot = openai_llm.structured_predict(\n",
    "    output_cls=ShotRecord,\n",
    "    prompt=golf_shot_prompt_template,\n",
    "    shot_description=\"I duffed my sand wedge out of the sand, and it only went 5 yards.\",\n",
    ")\n",
    "print(shot)"
   ]
  },
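  {
   "cell_type": "markdown",
   "id": "d5e3f4a1",
   "metadata": {},
   "source": [
    "Because `ShotRecord` is a Pydantic model, the extracted object can be serialized directly, e.g. for logging or storage. A sketch (the method name depends on the Pydantic version your llama-index bridge uses: `.dict()` on v1, `.model_dump()` on v2):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d5e3f4a2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# the structured output is a plain Pydantic object\n",
    "if hasattr(shot, \"model_dump\"):  # Pydantic v2\n",
    "    print(shot.model_dump())\n",
    "else:  # Pydantic v1\n",
    "    print(shot.dict())"
   ]
  },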
  {
   "cell_type": "markdown",
   "id": "73658afe-4690-4165-a86d-a3f3154d882f",
   "metadata": {},
   "source": [
    "## Notable Applications Powered By LLMs\n",
    "\n",
    "- ChatGPT\n",
    "- HuggingChat (an open-source equivalent to ChatGPT)\n",
    "- Perplexity (an AI answer engine aiming to rival Google Search)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06f796b1-cd3f-41a0-81e3-5326bf409889",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Eleven](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/11.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06113973-f7f5-42fe-96bf-2a14be1b6c84",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Twelve](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/12.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59c6c7fb-868e-40e6-9143-45628f98164a",
   "metadata": {},
   "source": [
    "## Example: LLMs Lack Access to Up-to-Date Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bc83246d-7ea4-4ed6-81db-996679d1c1f9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# should be able to answer this without additional context\n",
    "response = mistral_llm.complete(\n",
    "    \"What can you tell me about the Royal Bank of Canada?\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5c0339eb-14d5-4d60-9c5d-4b9432f280b1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The Royal Bank of Canada (RBC) is the largest bank in Canada by market capitalization and one of the largest banks in the world based on that measure. It was founded in 1864 in Halifax, Nova Scotia, and its headquarters are now in Toronto, Ontario.\n",
      "\n",
      "RBC provides a wide range of financial services, including personal and commercial banking, wealth management, insurance, and capital markets services. It operates in Canada, the United States, and 34 other countries around the world.\n",
      "\n",
      "The bank has a strong reputation for corporate social responsibility and has been recognized for its efforts in areas such as environmental sustainability, community development, and diversity and inclusion.\n",
      "\n",
      "As of 2021, RBC employs approximately 86,000 people and serves more than 17 million clients worldwide.\n"
     ]
    }
   ],
   "source": [
    "print(response)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a1a0204-a42b-4132-8029-0cf86c0e2e90",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "I'm an AI and I don't have real-time access to specific databases or the internet to provide the exact percentage from the 2023 Engagement Survey regarding employees' feelings about their contribution to RBC's success. I would recommend checking the official RBC or relevant resources where the survey's results are published.\n"
     ]
    }
   ],
   "source": [
    "# a query that requires the 2023 Annual Report\n",
    "query = \"According to the 2023 Engagement Survey, what percentage of employees felt they contribute to RBC's success?\"\n",
    "\n",
    "response = mistral_llm.complete(query)\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a3bcd491-b438-48e8-ade3-f5219fe1c308",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Thirteen](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/13-updated.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "38d24db0-d19e-47ff-8282-3a03f798604f",
   "metadata": {},
   "source": [
    "## Example: RAG Yields More Accurate Responses"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8cda0969-7314-4eef-8ddd-db0ae80ae219",
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir -p data\n",
    "!wget \"https://www.rbc.com/investor-relations/_assets-custom/pdf/ar_2023_e.pdf\" -O \"./data/RBC-Annual-Report-2023.pdf\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a773f87-64c5-40da-bb3e-3db96d4d6230",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import SimpleDirectoryReader, VectorStoreIndex\n",
    "\n",
    "# build an in-memory RAG over the Annual Report in 4 lines of code\n",
    "loader = SimpleDirectoryReader(input_dir=\"./data\")\n",
    "documents = loader.load_data()\n",
    "index = VectorStoreIndex.from_documents(documents)\n",
    "rag = index.as_query_engine(llm=mistral_llm)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "207e8d84-d6d5-45b7-9f8c-b21ba8658a2e",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = rag.query(query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d76260a1-ddaa-48ee-acc6-6b99a6e0d5f6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "According to the 2023 Employee Engagement Survey, 93% of employees felt they contribute to RBC's success.\n"
     ]
    }
   ],
   "source": [
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7f024428-379c-4c0d-bbdd-5909e569a5b2",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Fourteen](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/14.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab094120-9669-41dc-adb0-a10343e3acef",
   "metadata": {},
   "source": [
    "## Example: 3 Steps For Basic RAG (Unpacking the Previous RAG Example)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2c02b0a2-dfe4-495c-8810-21998d35b280",
   "metadata": {},
   "source": [
    "### Step 1: Build Knowledge Store"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f108a3d4-4e2a-4e3d-a20b-1f3e122cfd78",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Load the data.\n",
    "\n",
    "With llama-index, before any transformations are applied,\n",
    "data is loaded into the `Document` abstraction, a\n",
    "container that holds the text of the document.\n",
    "\"\"\"\n",
    "\n",
    "from llama_index.core import SimpleDirectoryReader\n",
    "\n",
    "loader = SimpleDirectoryReader(input_dir=\"./data\")\n",
    "documents = loader.load_data()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3dba1e90-4cd7-4394-a8ba-e7ed52c35729",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'2  |  Royal Bank of Canada Annual Report 2023Our Purpose\\nHelping clients thrive and  \\ncommunities prosper\\nGuided by our Vision  to be among the world’s most trusted \\nand successful financial institutions, and driven by our \\nPurpose, we aim to be:\\nIn Canada:  \\nthe undisputed leader in financial services\\nIn the United States:  \\nthe preferred partner to corporate, institutional and high  \\nnet worth clients and their businesses\\nIn select global financial centres:   \\na leading financial services partner valued for our expertise\\nConnect with us\\n  facebook.com/rbc\\n  instagram.com/rbc x.com/rbc\\n  youtube.com/user/RBC  linkedin.com/company/rbc\\n tiktok.com/@rbcFor more information on how we \\nare leading with Purpose in creating \\ndifferentiated value for our clients, \\ncommunities, employees and \\nshareholders, please visit  \\nRBC Stories .We are guided by our Values :\\n\\uf0a1   Client First\\n\\uf0a1    Collaboration\\n\\uf0a1   Accountability\\n\\uf0a1   Diversity & Inclusion\\n\\uf0a1   Integri ty'"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# if you want to see what the text looks like\n",
    "documents[1].text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4a2c764c-ef6b-41ce-9ba4-c8a54ac75990",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Chunk, Encode, and Store into a Vector Store.\n",
    "\n",
    "To streamline the process, we can make use of the IngestionPipeline\n",
    "class that will apply your specified transformations to the\n",
    "`Document` objects.\n",
    "\"\"\"\n",
    "\n",
    "from llama_index.core.ingestion import IngestionPipeline\n",
    "from llama_index.core.node_parser import SentenceSplitter\n",
    "from llama_index.embeddings.openai import OpenAIEmbedding\n",
    "from llama_index.vector_stores.qdrant import QdrantVectorStore\n",
    "import qdrant_client\n",
    "\n",
    "client = qdrant_client.QdrantClient(location=\":memory:\")\n",
    "vector_store = QdrantVectorStore(client=client, collection_name=\"test_store\")\n",
    "\n",
    "pipeline = IngestionPipeline(\n",
    "    transformations=[\n",
    "        SentenceSplitter(),\n",
    "        OpenAIEmbedding(),\n",
    "    ],\n",
    "    vector_store=vector_store,\n",
    ")\n",
    "_nodes = pipeline.run(documents=documents, num_workers=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "53303b32-eb64-40e5-9589-c9cb81378631",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Create a llama-index... wait for it... Index.\n",
    "\n",
    "After uploading your encoded documents into your vector\n",
    "store of choice, you can connect to it with a VectorStoreIndex\n",
    "which then gives you access to all of the llama-index functionality.\n",
    "\"\"\"\n",
    "\n",
    "from llama_index.core import VectorStoreIndex\n",
    "\n",
    "index = VectorStoreIndex.from_vector_store(vector_store=vector_store)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db2c0528-875a-4bf5-9372-358af9ae3d4e",
   "metadata": {},
   "source": [
    "### Step 2: Retrieve Against A Query"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "48d6c333-06fb-4bc5-ba86-f4712d25b7fa",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Retrieve relevant documents against a query.\n",
    "\n",
    "With our Index ready, we can now query it to\n",
    "retrieve the most relevant document chunks.\n",
    "\"\"\"\n",
    "\n",
    "retriever = index.as_retriever(similarity_top_k=2)\n",
    "retrieved_nodes = retriever.retrieve(query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ab5eacd-0738-4f1f-94b7-af7872c8cd3f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[NodeWithScore(node=TextNode(id_='cf0ffaaa-be5a-45cb-9e33-780b008d2ca4', embedding=None, metadata={'page_label': '14', 'file_name': 'RBC-Annual-Report-2023.pdf', 'file_path': '/Users/nerdai/talks/2024/rbc/data/RBC-Annual-Report-2023.pdf', 'file_type': 'application/pdf', 'file_size': 7571657, 'creation_date': '2024-05-10', 'last_modified_date': '2023-11-29'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='57f53762-e592-4810-8af8-9cb2b49b7ea0', node_type=<ObjectType.DOCUMENT: '4'>, metadata={'page_label': '14', 'file_name': 'RBC-Annual-Report-2023.pdf', 'file_path': '/Users/nerdai/talks/2024/rbc/data/RBC-Annual-Report-2023.pdf', 'file_type': 'application/pdf', 'file_size': 7571657, 'creation_date': '2024-05-10', 'last_modified_date': '2023-11-29'}, hash='3e54d0263458220b7e8dc4be8ce5d0fe1c1e0fda675731cae8f41fbbb0dfa0d8')}, text='14  |  Royal Bank of Canada Annual Report 2023\\nEmployees\\nOur strength and success is rooted in the 94,000+ employees worldwide who \\nlive our Purpose and Values. RBC is committed to an inclusive workplace culture \\nthat engages, supports and empowers our employees to help clients thrive and \\ncommunities prosper.\\nAmong Canada’s Top 100 Employers  and Best \\nWorkplaces  in 2023(1)(2)\\nOne of Canada’s Best Diversity Employers(2) and a \\nDiversity Champion Talent Award  recipient(3)\\nOngoing learning is a cornerstone of how we \\nsupport our colleagues’ professional development \\nand career aspirations. 
Our global workforce \\ncollectively invested 3 million hours  in building their \\ntechnical and business skills(4)\\nApproximately $19+ billion  in competitive \\ncompensation and benefits, including increased \\nemployee matching to our defined-contribution \\npension plan\\nRecognized as one of Canada’s Top Employers for \\nYoung People(1)2023 highlights across our balanced scorecard\\n93%\\nfeel they contribute to  \\nRBC’s success2023 Employee Engagement Survey  found employees are highly engaged and feel \\nproud to be part of RBC(11) \\n88%\\nare proud to be part  \\nof RBC87%\\nare willing to go above  \\nand beyond', start_char_idx=0, end_char_idx=1196, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.8692386439413089),\n",
       " NodeWithScore(node=TextNode(id_='53e36e59-957b-4f6c-82ea-11d6dc430497', embedding=None, metadata={'page_label': '15', 'file_name': 'RBC-Annual-Report-2023.pdf', 'file_path': '/Users/nerdai/talks/2024/rbc/data/RBC-Annual-Report-2023.pdf', 'file_type': 'application/pdf', 'file_size': 7571657, 'creation_date': '2024-05-10', 'last_modified_date': '2023-11-29'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='a57dbe0c-92c9-438b-9659-c0752bbc1bce', node_type=<ObjectType.DOCUMENT: '4'>, metadata={'page_label': '15', 'file_name': 'RBC-Annual-Report-2023.pdf', 'file_path': '/Users/nerdai/talks/2024/rbc/data/RBC-Annual-Report-2023.pdf', 'file_type': 'application/pdf', 'file_size': 7571657, 'creation_date': '2024-05-10', 'last_modified_date': '2023-11-29'}, hash='961008f4b212edf89654f6471d67b63deeff5f8bbfd03f8a1edc22599140d55e')}, text='Royal Bank of Canada Annual Report 2023  |  15\\nWomen  represented: \\n\\uf0a1  49% of hires(5)\\n\\uf0a1  54% of promotions(6)\\n\\uf0a1   43% of new executive \\nappointments(7) , relative to  \\nour goal of 50%Black, Indigenous and People  \\nof Colour (BIPOC)  represented:\\n\\uf0a1  61% of hires(5)\\n\\uf0a1  45% of promotions(6)\\n\\uf0a1   25% of new executive \\nappointments(7) , relative to our \\ngoal of 30%Continued our focus on Diversity & Inclusion\\n(1) MediaCorp Canada Inc.\\n(2) Great Place to Work Institute\\n(3) Diversity Champion Talent Award for companies above 10,000 employees, LinkedIn\\n(4)  Learning hours encompass the cumulative time devoted to various learning initiatives during fiscal 2023\\n(5)  Hires includes new external hires and rehires globally excluding City National Bank and RBC Brewin Dolphin; based on 
self-identification; excludes summer interns, students and co-ops \\n(6)  Promotions are defined as an upward change in Global Grade. Excludes summer interns, students, co-ops, City National Bank and RBC Brewin Dolphin. Values represent data from our \\nglobal operations. Based on self-identification\\n(7)  A new executive appointment is the appointment of an internal employee or external hire as a first-time Vice President, Senior Vice President or Executive Vice President. Based on self-\\nidentification. Per RBC’s Diversity & Inclusion Roadmap 2025, our goal is to achieve representation of 30% BIPOC executives and 50% women executives by 2025\\n(8) Headcount under 30 globally, including City National Bank, BlueBay Asset Management and RBC Brewin Dolphin employees\\n(9) Based on self-identification\\n(10) A group that is historically underrepresented may include those who self-identify as women; Black, Indigenous, and People of Colour (BIPOC); LGBTQ+ and/or persons with disabilities\\n(11) Employee Engagement Survey conducted between April 26-May 10, 2023; participation rate was 74%Global employee base \\ncomprised of 18%  \\nyoung people(8)\\nWelcomed 1,900+  \\nsummer students  across the \\nglobe, 59% were BIPOC(9)\\nWe continued to intentionally enhance programs for historically underrepresented groups(10) to drive more \\nequitable opportunities for promotion and development.', start_char_idx=0, end_char_idx=2156, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.8493285917258472)]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# to view the retrieved nodes\n",
    "retrieved_nodes"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "682d1e8c-a579-4449-b60d-8f2bd98ad1ee",
   "metadata": {},
   "source": [
    "### Step 3: Generate Final Response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "08e65509-ab2c-49bd-9dc1-af38e389b40c",
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Context-Augemented Generation.\n",
    "\n",
    "With our Index ready, we can create a QueryEngine\n",
    "that handles the retrieval and context augmentation\n",
    "in order to get the final response.\n",
    "\"\"\"\n",
    "\n",
    "query_engine = index.as_query_engine(llm=mistral_llm)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "31d94f08-db5b-452b-bb24-cf087d6c5307",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Context information is below.\n",
      "---------------------\n",
      "{context_str}\n",
      "---------------------\n",
      "Given the context information and not prior knowledge, answer the query.\n",
      "Query: {query_str}\n",
      "Answer: \n"
     ]
    }
   ],
   "source": [
    "# to inspect the default prompt being used\n",
    "print(\n",
    "    query_engine.get_prompts()[\n",
    "        \"response_synthesizer:text_qa_template\"\n",
    "    ].default_template.template\n",
    ")"
   ]
  },
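  {
   "cell_type": "markdown",
   "id": "f3a1c2d4-91ab-4c01-8e2f-1a2b3c4d5e01",
   "metadata": {},
   "source": [
    "Under the hood, `query_engine.query` fills the two placeholders above with the retrieved node text and the user's question. A minimal sketch of that substitution, using plain `str.format` with illustrative strings (no llama-index required):\n",
    "\n",
    "```python\n",
    "qa_template = (\n",
    "    \"Context information is below.\\n\"\n",
    "    \"---------------------\\n\"\n",
    "    \"{context_str}\\n\"\n",
    "    \"---------------------\\n\"\n",
    "    \"Given the context information and not prior knowledge, answer the query.\\n\"\n",
    "    \"Query: {query_str}\\n\"\n",
    "    \"Answer: \"\n",
    ")\n",
    "\n",
    "# Substitute retrieved context and the user's question into the template.\n",
    "prompt = qa_template.format(\n",
    "    context_str=\"<text of the retrieved nodes goes here>\",\n",
    "    query_str=\"What percentage of employees felt they contribute to RBC's success?\",\n",
    ")\n",
    "```"
   ]
  },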
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1439a11f-cba5-4b80-8c4c-edbd43026b93",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "According to the 2023 Employee Engagement Survey, 93% of employees felt they contribute to RBC's success.\n"
     ]
    }
   ],
   "source": [
    "response = query_engine.query(query)\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "716216ec-3d15-405b-ac47-67d454618516",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Fifteen](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/15.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "821424ae-8567-4919-95e1-385c4cd07052",
   "metadata": {},
   "source": [
    "[Hi-Resolution Cheat Sheet](https://d3ddy8balm3goa.cloudfront.net/llamaindex/rag-cheat-sheet-final.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58ec1f66-4e81-4092-adef-e0c25a52feec",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Sixteen](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/16.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ca6f2caa-eb71-4248-997c-7cf6906ff03f",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Seventeen](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/17.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "892234c8-4567-4606-bdab-fe4878b89cce",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Eighteen](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/18.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "24fdb303-d471-42c4-8eb6-2e82dc631fc5",
   "metadata": {},
   "source": [
    "## Example: Tool Use (or Function Calling)\n",
    "\n",
    "**Note:** LLMs are not very good pseudo-random number generators (see my [LinkedIn post](https://www.linkedin.com/posts/nerdai_heres-s-fun-mini-experiment-the-activity-7193715824493219841-6AWt?utm_source=share&utm_medium=member_desktop) about this)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f9ac3f5-610a-4006-ad8c-41269230e680",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.tools import FunctionTool\n",
    "from llama_index.agent.openai import OpenAIAgent\n",
    "from numpy import random"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9344061a-1609-4b8d-ba05-3aa115f2ad0a",
   "metadata": {},
   "outputs": [],
   "source": [
    "def uniform_random_sample(n: int) -> List[float]:\n",
    "    \"\"\"Generate a list a of uniform random numbers of size n between 0 and 1.\"\"\"\n",
    "    return random.rand(n).tolist()\n",
    "\n",
    "\n",
    "rs_tool = FunctionTool.from_defaults(fn=uniform_random_sample)"
   ]
  },
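  {
   "cell_type": "markdown",
   "id": "c7d8e9f0-12ab-4c34-9d56-1a2b3c4d5e02",
   "metadata": {},
   "source": [
    "If numpy is unavailable, a standard-library equivalent of the tool function could look like this (the name `uniform_random_sample_stdlib` is our own, and the alias avoids clobbering numpy's `random` imported above):\n",
    "\n",
    "```python\n",
    "from random import random as uniform01  # stdlib uniform sampler on [0, 1)\n",
    "from typing import List\n",
    "\n",
    "\n",
    "def uniform_random_sample_stdlib(n: int) -> List[float]:\n",
    "    \"\"\"Stdlib version of the tool: n uniform floats in [0, 1).\"\"\"\n",
    "    return [uniform01() for _ in range(n)]\n",
    "\n",
    "\n",
    "sample = uniform_random_sample_stdlib(10)\n",
    "```"
   ]
  },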
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ec77eb9-270f-4c4e-badd-ee8378f1d72b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Added user message to memory: Can you please give me a sample of 10 uniformly random numbers?\n",
      "=== Calling Function ===\n",
      "Calling function: uniform_random_sample with args: {\"n\":10}\n",
      "Got output: [0.1793019185840643, 0.9247309455922746, 0.7532465773953924, 0.7593463715093797, 0.745433523061156, 0.7385542965152919, 0.14206969872311048, 0.6199574176303044, 0.4295644155200895, 0.18463838329474935]\n",
      "========================\n",
      "\n",
      "Here is a sample of 10 uniformly random numbers:\n",
      "\n",
      "1. 0.1793\n",
      "2. 0.9247\n",
      "3. 0.7532\n",
      "4. 0.7593\n",
      "5. 0.7454\n",
      "6. 0.7386\n",
      "7. 0.1421\n",
      "8. 0.6200\n",
      "9. 0.4296\n",
      "10. 0.1846\n"
     ]
    }
   ],
   "source": [
    "agent = OpenAIAgent.from_tools([rs_tool], llm=openai_llm, verbose=True)\n",
    "\n",
    "response = agent.chat(\n",
    "    \"Can you please give me a sample of 10 uniformly random numbers?\"\n",
    ")\n",
    "print(str(response))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "061cd47f-5f40-467c-8e88-edfa1474333f",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Nineteen](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/19.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "134354bf-6901-481a-87da-d4a41cb6cd79",
   "metadata": {},
   "source": [
    "## Example: Reflection Toxicity Reduction\n",
    "\n",
    "Here, we'll use llama-index `TollInteractiveReflectionAgent` to perform reflection and correction cycles on potentially harmful text. See the full demo [here](https://github.com/run-llama/llama_index/blob/main/docs/examples/agent/introspective_agent_toxicity_reduction.ipynb).\n",
    "\n",
    "The first thing we will do here is define the `PerspectiveTool`, which our `ToolInteractiveReflectionAgent` will make use of thru another agent, namely a `CritiqueAgent`.\n",
    "\n",
    "To use Perspecive's API, you will need to do the following steps:\n",
    "\n",
    "1. Enable the Perspective API in your Google Cloud projects\n",
    "2. Generate a new set of credentials (i.e. API key) that you will need to either set an env var `PERSPECTIVE_API_KEY` or supply directly in the appropriate parts of the code that follows.\n",
    "\n",
    "To perform steps 1. and 2., you can follow the instructions outlined here: https://developers.perspectiveapi.com/s/docs-enable-the-api?language=en_US."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "64e8c235-4fef-450b-9e60-a879455da6af",
   "metadata": {},
   "source": [
    "### Perspective API as Tool"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9bd14695-5cff-4000-a104-981275c854f9",
   "metadata": {},
   "outputs": [],
   "source": [
    "from googleapiclient import discovery\n",
    "from typing import Dict, Optional, Tuple\n",
    "import os\n",
    "\n",
    "\n",
    "class Perspective:\n",
    "    \"\"\"Custom class to interact with Perspective API.\"\"\"\n",
    "\n",
    "    attributes = [\n",
    "        \"toxicity\",\n",
    "        \"severe_toxicity\",\n",
    "        \"identity_attack\",\n",
    "        \"insult\",\n",
    "        \"profanity\",\n",
    "        \"threat\",\n",
    "        \"sexually_explicit\",\n",
    "    ]\n",
    "\n",
    "    def __init__(self, api_key: Optional[str] = None) -> None:\n",
    "        if api_key is None:\n",
    "            try:\n",
    "                api_key = os.environ[\"PERSPECTIVE_API_KEY\"]\n",
    "            except KeyError:\n",
    "                raise ValueError(\n",
    "                    \"Please provide an api key or set PERSPECTIVE_API_KEY env var.\"\n",
    "                )\n",
    "\n",
    "        self._client = discovery.build(\n",
    "            \"commentanalyzer\",\n",
    "            \"v1alpha1\",\n",
    "            developerKey=api_key,\n",
    "            discoveryServiceUrl=\"https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1\",\n",
    "            static_discovery=False,\n",
    "        )\n",
    "\n",
    "    def get_toxicity_scores(self, text: str) -> Dict[str, float]:\n",
    "        \"\"\"Function that makes API call to Perspective to get toxicity scores across various attributes.\"\"\"\n",
    "        analyze_request = {\n",
    "            \"comment\": {\"text\": text},\n",
    "            \"requestedAttributes\": {\n",
    "                att.upper(): {} for att in self.attributes\n",
    "            },\n",
    "        }\n",
    "\n",
    "        response = (\n",
    "            self._client.comments().analyze(body=analyze_request).execute()\n",
    "        )\n",
    "        try:\n",
    "            return {\n",
    "                att: response[\"attributeScores\"][att.upper()][\"summaryScore\"][\n",
    "                    \"value\"\n",
    "                ]\n",
    "                for att in self.attributes\n",
    "            }\n",
    "        except Exception as e:\n",
    "            raise ValueError(\"Unable to parse response\") from e\n",
    "\n",
    "\n",
    "perspective = Perspective()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ac230f97-3562-4df3-b611-f529085c7287",
   "metadata": {},
   "outputs": [],
   "source": [
    "def perspective_function_tool(\n",
    "    text: str = Field(\n",
    "        default_factory=str,\n",
    "        description=\"The text to compute toxicity scores on.\",\n",
    "    ),\n",
    ") -> Tuple[str, float]:\n",
    "    \"\"\"Returns the toxicity score of the most problematic toxic attribute.\"\"\"\n",
    "    scores = perspective.get_toxicity_scores(text=text)\n",
    "    max_key = max(scores, key=scores.get)\n",
    "    return (max_key, scores[max_key] * 100)\n",
    "\n",
    "\n",
    "from llama_index.core.tools import FunctionTool\n",
    "\n",
    "pespective_tool = FunctionTool.from_defaults(\n",
    "    perspective_function_tool,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "121b73e7-d45c-4fd4-9f1a-33b5c055fd6a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "('toxicity', 2.5438840000000003)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "perspective_function_tool(text=\"friendly greetings from python\")"
   ]
  },
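  {
   "cell_type": "markdown",
   "id": "d4e5f6a7-34cd-4e56-8f90-1a2b3c4d5e03",
   "metadata": {},
   "source": [
    "The tool collapses the per-attribute scores down to the single worst offender via `max(scores, key=scores.get)`. A small illustration of that reduction with made-up scores:\n",
    "\n",
    "```python\n",
    "# Made-up scores for illustration; real values come from the Perspective API.\n",
    "scores = {\"toxicity\": 0.025, \"insult\": 0.507, \"threat\": 0.012}\n",
    "max_key = max(scores, key=scores.get)  # attribute with the highest score\n",
    "worst = (max_key, scores[max_key] * 100)  # score expressed as a percentage\n",
    "```"
   ]
  },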
  {
   "cell_type": "markdown",
   "id": "ad35861b-a162-4590-a2ba-53f917d53b78",
   "metadata": {},
   "source": [
    "### Build Agent To Reduce Toxicity of Harmful Text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b510f01c-6619-417b-b040-ac2a7e744431",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.agent.introspective import IntrospectiveAgentWorker\n",
    "from llama_index.agent.introspective import (\n",
    "    ToolInteractiveReflectionAgentWorker,\n",
    ")\n",
    "from llama_index.core.agent import FunctionAgent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8518fc2c-36c2-4144-b3ad-3c206be99c28",
   "metadata": {},
   "outputs": [],
   "source": [
    "verbose = True\n",
    "critique_agent_worker = FunctionAgent(\n",
    "    tools=[pespective_tool], llm=OpenAI(\"gpt-4.1\")\n",
    ")\n",
    "correction_llm = OpenAI(\"gpt-4.1-mini\")\n",
    "\n",
    "\n",
    "def stopping_callable(critique_str: str) -> bool:\n",
    "    \"\"\"\n",
    "    Function that determines stopping condition for reflection & correction cycles.\n",
    "\n",
    "    critique_str [str]: The response string provided by the critique agent.\n",
    "    \"\"\"\n",
    "    return \"[PASS]\" in critique_str\n",
    "\n",
    "\n",
    "tool_interactive_reflection_agent_worker = (\n",
    "    ToolInteractiveReflectionAgentWorker.from_defaults(\n",
    "        critique_agent_worker=critique_agent_worker,\n",
    "        critique_template=(\n",
    "            \"Please provide a toxicity critique for the provided text. \"\n",
    "            \"If the toxicity score is less than 3.0 then please \"\n",
    "            \"write '[PASS]' otherwise write '[FAIL]'. \"\n",
    "            \"Here is the text:\\n {input_str}\"\n",
    "        ),\n",
    "        stopping_callable=stopping_callable,\n",
    "        correction_llm=correction_llm,\n",
    "        verbose=verbose,\n",
    "    )\n",
    ")\n",
    "\n",
    "introspective_agent_worker = IntrospectiveAgentWorker.from_defaults(\n",
    "    reflective_agent_worker=tool_interactive_reflection_agent_worker,\n",
    "    verbose=verbose,\n",
    ")\n",
    "\n",
    "chat_history = [\n",
    "    ChatMessage(\n",
    "        content=\"You are an assistant that generates safer versions of potentially toxic, user-supplied text.\",\n",
    "        role=MessageRole.SYSTEM,\n",
    "    )\n",
    "]\n",
    "\n",
    "introspective_agent = introspective_agent_worker.as_agent(\n",
    "    chat_history=chat_history, verbose=verbose\n",
    ")"
   ]
  },
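  {
   "cell_type": "markdown",
   "id": "e5f6a7b8-56ef-4a78-9b01-1a2b3c4d5e04",
   "metadata": {},
   "source": [
    "Conceptually, the reflective worker alternates critique and correction until the critique contains `[PASS]` or a cycle limit is reached. A schematic, self-contained sketch of that loop, with stub `critique` and `correct` functions standing in for the agents above:\n",
    "\n",
    "```python\n",
    "def run_reflection_loop(text, critique, correct, stop, max_cycles=3):\n",
    "    \"\"\"Alternate critique and correction until the critique passes.\"\"\"\n",
    "    for _ in range(max_cycles):\n",
    "        critique_str = critique(text)\n",
    "        if stop(critique_str):\n",
    "            break\n",
    "        text = correct(text, critique_str)\n",
    "    return text\n",
    "\n",
    "\n",
    "# Stub critique passes short texts; stub correction halves the text.\n",
    "result = run_reflection_loop(\n",
    "    \"a deliberately long offending string\",\n",
    "    critique=lambda t: \"[PASS]\" if len(t) < 10 else \"[FAIL]\",\n",
    "    correct=lambda t, c: t[: len(t) // 2],\n",
    "    stop=lambda c: \"[PASS]\" in c,\n",
    ")\n",
    "```"
   ]
  },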
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d2cf0716-9188-43c6-b7e2-9ef8c95db84b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> Running step 9d8990e0-3c1c-4344-a38b-218b85b398d1. Step input: Those individuals who publicly tout eating no meat, especially when their stated reason has to do with caring about animals, are some of the most hypocritical people I’ve ever met. One vegan acquaintance routinely beat her\n",
      "Added user message to memory: Those individuals who publicly tout eating no meat, especially when their stated reason has to do with caring about animals, are some of the most hypocritical people I’ve ever met. One vegan acquaintance routinely beat her\n",
      "> Running step afec6293-6a2e-4399-9204-c14c805bc8e4. Step input: Please provide a toxicity critique for the provided text. If the toxicity score is less than 3.0 then please write '[PASS]' otherwise write '[FAIL]'. Here is the text:\n",
      " Those individuals who publicly tout eating no meat, especially when their stated reason has to do with caring about animals, are some of the most hypocritical people I’ve ever met. One vegan acquaintance routinely beat her\n",
      "Added user message to memory: Please provide a toxicity critique for the provided text. If the toxicity score is less than 3.0 then please write '[PASS]' otherwise write '[FAIL]'. Here is the text:\n",
      " Those individuals who publicly tout eating no meat, especially when their stated reason has to do with caring about animals, are some of the most hypocritical people I’ve ever met. One vegan acquaintance routinely beat her\n",
      "=== Calling Function ===\n",
      "Calling function: perspective_function_tool with args: {\"text\": \"Those individuals who publicly tout eating no meat, especially when their stated reason has to do with caring about animals, are some of the most hypocritical people I\\u2019ve ever met. One vegan acquaintance routinely beat her\"}\n",
      "=== Function Output ===\n",
      "('insult', 50.72643)\n",
      "> Running step 45fb888b-d321-4c89-888f-7aab983c967b. Step input: None\n",
      "=== LLM Response ===\n",
      "[FAIL] The toxicity score of the most problematic toxic attribute is 50.73, which is higher than 3.0.\n",
      "Critique: assistant: [FAIL] The toxicity score of the most problematic toxic attribute is 50.73, which is higher than 3.0.\n",
      "Correction: People who choose not to eat meat for ethical reasons related to animal welfare are often passionate about their beliefs. It's important to engage in respectful discussions about dietary choices and their impact on animal welfare.\n",
      "> Running step c1960ec0-79d4-4b7a-a357-83f986c3085a. Step input: Please provide a toxicity critique for the provided text. If the toxicity score is less than 3.0 then please write '[PASS]' otherwise write '[FAIL]'. Here is the text:\n",
      " People who choose not to eat meat for ethical reasons related to animal welfare are often passionate about their beliefs. It's important to engage in respectful discussions about dietary choices and their impact on animal welfare.\n",
      "Added user message to memory: Please provide a toxicity critique for the provided text. If the toxicity score is less than 3.0 then please write '[PASS]' otherwise write '[FAIL]'. Here is the text:\n",
      " People who choose not to eat meat for ethical reasons related to animal welfare are often passionate about their beliefs. It's important to engage in respectful discussions about dietary choices and their impact on animal welfare.\n",
      "=== Calling Function ===\n",
      "Calling function: perspective_function_tool with args: {\"text\": \"People who choose not to eat meat for ethical reasons related to animal welfare are often passionate about their beliefs. It's important to engage in respectful discussions about dietary choices and their impact on animal welfare.\"}\n",
      "=== Function Output ===\n",
      "('toxicity', 1.3697007)\n",
      "> Running step 055e8ed0-80c7-45c5-9a19-725e4b365657. Step input: None\n",
      "=== LLM Response ===\n",
      "[PASS] The toxicity score of the most problematic toxic attribute is 1.3697, which is less than 3.0.\n",
      "Critique: assistant: [PASS] The toxicity score of the most problematic toxic attribute is 1.3697, which is less than 3.0.\n"
     ]
    }
   ],
   "source": [
    "harmful_text = (\n",
    "    \"Those individuals who publicly tout eating no meat, \"\n",
    "    \"especially when their stated reason has to do with caring \"\n",
    "    \"about animals, are some of the most hypocritical people I’ve \"\n",
    "    \"ever met. One vegan acquaintance routinely beat her\"\n",
    ")\n",
    "response = introspective_agent.chat(harmful_text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "76fb45f1-d384-4ff7-b50d-c7d985f785d9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "People who choose not to eat meat for ethical reasons related to animal welfare are often passionate about their beliefs. It's important to engage in respectful discussions about dietary choices and their impact on animal welfare.\n"
     ]
    }
   ],
   "source": [
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d2b1d75a-3aa1-460d-af4a-8bf79215bb2b",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide Twenty](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/20.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "30cbac4d-170c-4a28-8211-92851f107e1d",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide TwentyOne](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/21.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3173d0d2-2d66-4d69-bd37-fafa18016c1a",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide TwentyTwo](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/22.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "da9138fa-2899-494d-bc61-bc3d54085035",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide TwentyThree](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/23.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4b4c1ef8-369a-42ba-a928-8bceabe4be36",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide TwentyFour](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/24.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7fe2f7b7-886a-4986-a189-d60b4abb8a0a",
   "metadata": {},
   "source": [
    "## Example: LlamaReaders & LlamaPacks\n",
    "\n",
    "All of our integrations and packs can be discovered at [llamahub.ai](https://llamahub.ai). All of our packages/integrations are their own Python package that can be downloaded from PyPi."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "10edddf8-b4a9-42cb-b22b-29af22ce5ae7",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index-readers-wikipedia -q\n",
    "%pip install wikipedia -q"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "89339ea9-d4a7-4f99-85ee-ccbf18c653dd",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.readers.wikipedia import WikipediaReader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3e829561-7178-4872-9353-55380cef6f95",
   "metadata": {},
   "outputs": [],
   "source": [
    "# now these docs can be used for RAG\n",
    "cities = [\"Toronto\", \"Berlin\", \"Tokyo\"]\n",
    "wiki_docs = WikipediaReader().load_data(pages=cities)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "60480e8a-2089-4379-8f9b-856013f89e5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "wiki_docs[0].text[:500]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5dbbc623-7ca4-4796-bc13-4b9eef3f6eaf",
   "metadata": {},
   "source": [
    "[Toronto Wikipedia Page](https://en.wikipedia.org/wiki/Toronto)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6fcc23d6-8824-43e3-949e-02b3a4a42a5d",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide TwentyFive](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/25.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6dd92deb-ec04-413e-bec6-0884add12ba6",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide TwentySix](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/26.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7958816e-2589-40c1-948a-d683866bef5b",
   "metadata": {},
   "source": [
    "![Divider Image](https://d3ddy8balm3goa.cloudfront.net/mlops-rag-workshop/divider-2.excalidraw.svg)\n",
    "![Slide TwentySeven](https://d3ddy8balm3goa.cloudfront.net/rbc-llm-workshop/27-updated.svg)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rbc-llm-workshop",
   "language": "python",
   "name": "rbc-llm-workshop"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
