{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iel3-lQX6bZv"
      },
      "source": [
        "# HW3: Quickly Build Your Own Application with an API\n",
        "Objective:\n",
        "- Understand how to build your own Language Model Application by calling API and feeding prompts.\n",
        "\n",
        "In this homework, you only need to choose **ONE** API.\n",
        "- **Gemini**: Free but slower.\n",
        "- **ChatGPT**: Faster, but it may cost you money; make sure you have some free usage remaining.\n",
        "\n",
        "\n",
        "**Homework slide**: https://docs.google.com/presentation/d/1fAXUpvAxmTQEQrIhJxcPz4AlAhY5hLsC6b9p5TrMAIU/edit?usp=sharing\n",
        "\n",
        "**To Start:**\n",
        "Hit one of these two buttons to unfold the blocks.![image.png]()\n",
        "\n",
        "**Remember**: if you decide to use Gemini, you should only execute the code blocks under **\"Gemini API\"**. You can use the **Table of Contents** (目錄) panel on the left side to locate yourself.![image.png]()\n",
        "\n",
        "If you have any questions, please contact the TAs via TA hours, NTU COOL, or email to ntu-gen-ai-2024-spring-ta@googlegroups.com"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xuh2R5DQSrcO"
      },
      "source": [
        "# ChatGPT API"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "disKjtBSPabw"
      },
      "source": [
        "## Install Packages\n",
        "Install all the necessary packages; this may take some time."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "CdxZQJwrPbQL",
        "outputId": "c7950101-0b2f-427f-ea83-221758d89827"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Requirement already satisfied: openai in /usr/local/lib/python3.11/dist-packages (1.59.9)\n",
            "Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.11/dist-packages (from openai) (3.7.1)\n",
            "Requirement already satisfied: distro<2,>=1.7.0 in /usr/local/lib/python3.11/dist-packages (from openai) (1.9.0)\n",
            "Requirement already satisfied: httpx<1,>=0.23.0 in /usr/local/lib/python3.11/dist-packages (from openai) (0.28.1)\n",
            "Requirement already satisfied: jiter<1,>=0.4.0 in /usr/local/lib/python3.11/dist-packages (from openai) (0.8.2)\n",
            "Requirement already satisfied: pydantic<3,>=1.9.0 in /usr/local/lib/python3.11/dist-packages (from openai) (2.10.6)\n",
            "Requirement already satisfied: sniffio in /usr/local/lib/python3.11/dist-packages (from openai) (1.3.1)\n",
            "Requirement already satisfied: tqdm>4 in /usr/local/lib/python3.11/dist-packages (from openai) (4.67.1)\n",
            "Requirement already satisfied: typing-extensions<5,>=4.11 in /usr/local/lib/python3.11/dist-packages (from openai) (4.12.2)\n",
            "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.11/dist-packages (from anyio<5,>=3.5.0->openai) (3.10)\n",
            "Requirement already satisfied: certifi in /usr/local/lib/python3.11/dist-packages (from httpx<1,>=0.23.0->openai) (2024.12.14)\n",
            "Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.11/dist-packages (from httpx<1,>=0.23.0->openai) (1.0.7)\n",
            "Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/lib/python3.11/dist-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai) (0.14.0)\n",
            "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from pydantic<3,>=1.9.0->openai) (0.7.0)\n",
            "Requirement already satisfied: pydantic-core==2.27.2 in /usr/local/lib/python3.11/dist-packages (from pydantic<3,>=1.9.0->openai) (2.27.2)\n",
            "Collecting gradio==4.44.1\n",
            "  Downloading gradio-4.44.1-py3-none-any.whl.metadata (15 kB)\n",
            "Requirement already satisfied: aiofiles<24.0,>=22.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (23.2.1)\n",
            "Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (3.7.1)\n",
            "Requirement already satisfied: fastapi<1.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.115.8)\n",
            "Requirement already satisfied: ffmpy in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.5.0)\n",
            "Collecting gradio-client==1.3.0 (from gradio==4.44.1)\n",
            "  Downloading gradio_client-1.3.0-py3-none-any.whl.metadata (7.1 kB)\n",
            "Requirement already satisfied: httpx>=0.24.1 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.28.1)\n",
            "Requirement already satisfied: huggingface-hub>=0.19.3 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.27.1)\n",
            "Requirement already satisfied: importlib-resources<7.0,>=1.3 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (6.5.2)\n",
            "Requirement already satisfied: jinja2<4.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (3.1.5)\n",
            "Requirement already satisfied: markupsafe~=2.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (2.1.5)\n",
            "Requirement already satisfied: matplotlib~=3.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (3.10.0)\n",
            "Requirement already satisfied: numpy<3.0,>=1.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (1.26.4)\n",
            "Requirement already satisfied: orjson~=3.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (3.10.15)\n",
            "Requirement already satisfied: packaging in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (24.2)\n",
            "Requirement already satisfied: pandas<3.0,>=1.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (2.2.2)\n",
            "Requirement already satisfied: pillow<11.0,>=8.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (10.4.0)\n",
            "Requirement already satisfied: pydantic>=2.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (2.10.6)\n",
            "Requirement already satisfied: pydub in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.25.1)\n",
            "Requirement already satisfied: python-multipart>=0.0.9 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.0.20)\n",
            "Requirement already satisfied: pyyaml<7.0,>=5.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (6.0.2)\n",
            "Requirement already satisfied: ruff>=0.2.2 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.9.4)\n",
            "Requirement already satisfied: semantic-version~=2.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (2.10.0)\n",
            "Requirement already satisfied: tomlkit==0.12.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.12.0)\n",
            "Requirement already satisfied: typer<1.0,>=0.12 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.15.1)\n",
            "Requirement already satisfied: typing-extensions~=4.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (4.12.2)\n",
            "Requirement already satisfied: urllib3~=2.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (2.3.0)\n",
            "Requirement already satisfied: uvicorn>=0.14.0 in /usr/local/lib/python3.11/dist-packages (from gradio==4.44.1) (0.34.0)\n",
            "Requirement already satisfied: fsspec in /usr/local/lib/python3.11/dist-packages (from gradio-client==1.3.0->gradio==4.44.1) (2024.10.0)\n",
            "Requirement already satisfied: websockets<13.0,>=10.0 in /usr/local/lib/python3.11/dist-packages (from gradio-client==1.3.0->gradio==4.44.1) (11.0.3)\n",
            "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.11/dist-packages (from anyio<5.0,>=3.0->gradio==4.44.1) (3.10)\n",
            "Requirement already satisfied: sniffio>=1.1 in /usr/local/lib/python3.11/dist-packages (from anyio<5.0,>=3.0->gradio==4.44.1) (1.3.1)\n",
            "Requirement already satisfied: starlette<0.46.0,>=0.40.0 in /usr/local/lib/python3.11/dist-packages (from fastapi<1.0->gradio==4.44.1) (0.45.3)\n",
            "Requirement already satisfied: certifi in /usr/local/lib/python3.11/dist-packages (from httpx>=0.24.1->gradio==4.44.1) (2024.12.14)\n",
            "Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.11/dist-packages (from httpx>=0.24.1->gradio==4.44.1) (1.0.7)\n",
            "Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/lib/python3.11/dist-packages (from httpcore==1.*->httpx>=0.24.1->gradio==4.44.1) (0.14.0)\n",
            "Requirement already satisfied: filelock in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.19.3->gradio==4.44.1) (3.17.0)\n",
            "Requirement already satisfied: requests in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.19.3->gradio==4.44.1) (2.32.3)\n",
            "Requirement already satisfied: tqdm>=4.42.1 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.19.3->gradio==4.44.1) (4.67.1)\n",
            "Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib~=3.0->gradio==4.44.1) (1.3.1)\n",
            "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.11/dist-packages (from matplotlib~=3.0->gradio==4.44.1) (0.12.1)\n",
            "Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.11/dist-packages (from matplotlib~=3.0->gradio==4.44.1) (4.55.7)\n",
            "Requirement already satisfied: kiwisolver>=1.3.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib~=3.0->gradio==4.44.1) (1.4.8)\n",
            "Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib~=3.0->gradio==4.44.1) (3.2.1)\n",
            "Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.11/dist-packages (from matplotlib~=3.0->gradio==4.44.1) (2.8.2)\n",
            "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.11/dist-packages (from pandas<3.0,>=1.0->gradio==4.44.1) (2024.2)\n",
            "Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.11/dist-packages (from pandas<3.0,>=1.0->gradio==4.44.1) (2025.1)\n",
            "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from pydantic>=2.0->gradio==4.44.1) (0.7.0)\n",
            "Requirement already satisfied: pydantic-core==2.27.2 in /usr/local/lib/python3.11/dist-packages (from pydantic>=2.0->gradio==4.44.1) (2.27.2)\n",
            "Requirement already satisfied: click>=8.0.0 in /usr/local/lib/python3.11/dist-packages (from typer<1.0,>=0.12->gradio==4.44.1) (8.1.8)\n",
            "Requirement already satisfied: shellingham>=1.3.0 in /usr/local/lib/python3.11/dist-packages (from typer<1.0,>=0.12->gradio==4.44.1) (1.5.4)\n",
            "Requirement already satisfied: rich>=10.11.0 in /usr/local/lib/python3.11/dist-packages (from typer<1.0,>=0.12->gradio==4.44.1) (13.9.4)\n",
            "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.11/dist-packages (from python-dateutil>=2.7->matplotlib~=3.0->gradio==4.44.1) (1.17.0)\n",
            "Requirement already satisfied: markdown-it-py>=2.2.0 in /usr/local/lib/python3.11/dist-packages (from rich>=10.11.0->typer<1.0,>=0.12->gradio==4.44.1) (3.0.0)\n",
            "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/local/lib/python3.11/dist-packages (from rich>=10.11.0->typer<1.0,>=0.12->gradio==4.44.1) (2.18.0)\n",
            "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.11/dist-packages (from requests->huggingface-hub>=0.19.3->gradio==4.44.1) (3.4.1)\n",
            "Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.11/dist-packages (from markdown-it-py>=2.2.0->rich>=10.11.0->typer<1.0,>=0.12->gradio==4.44.1) (0.1.2)\n",
            "Downloading gradio-4.44.1-py3-none-any.whl (18.1 MB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m18.1/18.1 MB\u001b[0m \u001b[31m38.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading gradio_client-1.3.0-py3-none-any.whl (318 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m318.7/318.7 kB\u001b[0m \u001b[31m22.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hInstalling collected packages: gradio-client, gradio\n",
            "  Attempting uninstall: gradio-client\n",
            "    Found existing installation: gradio_client 0.16.1\n",
            "    Uninstalling gradio_client-0.16.1:\n",
            "      Successfully uninstalled gradio_client-0.16.1\n",
            "  Attempting uninstall: gradio\n",
            "    Found existing installation: gradio 4.29.0\n",
            "    Uninstalling gradio-4.29.0:\n",
            "      Successfully uninstalled gradio-4.29.0\n",
            "Successfully installed gradio-4.44.1 gradio-client-1.3.0\n"
          ]
        }
      ],
      "source": [
        "# install required packages\n",
        "!pip install openai\n",
        "!pip install gradio"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xx4EsADZmzxQ"
      },
      "source": [
        "## Import and Setup\n",
        "**Remember to fill in your OpenAI API key in this block; a tutorial on getting the key is in our Homework 3 slides. Make doubly sure not to share your API key with anyone else.**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "8AcOCl6lSU2X",
        "outputId": "6a3eae53-061a-4113-a058-b4044827058a"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "1.40.1\n",
            "Set ChatGPT API successfully!!\n"
          ]
        }
      ],
      "source": [
        "# import the packages\n",
        "import openai\n",
        "import gradio as gr\n",
        "import json\n",
        "from typing import List, Dict, Tuple\n",
        "import os\n",
        "OPENAI_API_KEY = os.environ.get(\"OPENAI_API_KEY\")\n",
        "# openai.api_key  = OPENAI_API_KEY # replace with your own key\n",
        "\n",
        "# from dotenv import load_dotenv, find_dotenv\n",
        "# _ = load_dotenv(find_dotenv()) # load the local .env file (recommended: keep your key in a .env file to protect it)\n",
        "# openai.api_key  = 'sk-***' # replace with your own key\n",
        "\n",
        "print(openai.__version__)\n",
        "## TODO: Fill in your OpenAI API key (here it is read from the OPENAI_API_KEY environment variable)\n",
        "client = openai.OpenAI(api_key=OPENAI_API_KEY)\n",
        "\n",
        "# Check if you have set your ChatGPT API successfully\n",
        "# You should see \"Set ChatGPT API successfully!!\" if nothing goes wrong.\n",
        "try:\n",
        "    response = client.chat.completions.create(\n",
        "            model=\"gpt-3.5-turbo\",\n",
        "            messages = [{'role':'user','content': \"test\"}],\n",
        "            max_tokens=1,\n",
        "    )\n",
        "    print(\"Set ChatGPT API successfully!!\")\n",
        "except Exception:\n",
        "    print(\"There seems to be something wrong with your ChatGPT API. Please follow our demonstration in the slide to get a correct one.\")"
      ]
    },
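    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a side note, here is a minimal sketch of validating the key before creating the client. The helper name and the `sk-` prefix check are our own illustration, not part of the assignment code:\n",
        "\n",
        "```python\n",
        "import os\n",
        "\n",
        "def resolve_api_key(env):\n",
        "    # Illustrative helper (not from the assignment): fetch the key from an\n",
        "    # environment mapping and fail fast on obviously bad values.\n",
        "    key = env.get('OPENAI_API_KEY', '')\n",
        "    if not key.startswith('sk-'):\n",
        "        raise ValueError('OPENAI_API_KEY missing or malformed')\n",
        "    return key\n",
        "\n",
        "# e.g. client = openai.OpenAI(api_key=resolve_api_key(os.environ))\n",
        "```"
      ]
    },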
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "g322vfjqVJGm"
      },
      "source": [
        "## Part 1: Summarization\n",
        "\n",
        "In this task, you are asked to prompt your chatbot into a **summarizer**. Its job: when the user inputs an article, it summarizes the article for the user.\n",
        "\n",
        "You need to:\n",
        "1. Come up with a prompt for summarization and fill it in **prompt_for_summarization**.\n",
        "2. **Hit the run button![image.png](). (The run button will turn into this state![image.png]() when successfully executed.)** It will pop up an interface that looks like this: (It may look a little bit different if you use dark mode.)![image.png]()\n",
        "3. Find an article on your own or use our example article, and fill it in the block named \"Article\".\n",
        "4. Hit the \"Send\" button to produce the summarization. (You can use the \"Temperature\" slider to control the creativeness of the output.)\n",
        "5. If you **want to change your prompt**, hit the run button again to stop the cell. Then go back to step 1.\n",
        "6. After you get the desired result, hit the button \"Export\" to save your result. A file named part1.json will appear in the file list. **Remember to download it to your own computer before it disappears.**\n",
        "\n",
        "Note:\n",
        "\n",
        "\n",
        "*  **If you hit the \"Export\" button again, the previous result will be overwritten, so make sure to download it first.**\n",
        "*  **You should keep in mind that even with the exact same prompt, the output might still differ.**\n",
        "\n",
        "\n",
        "---\n",
        "\n",
        "\n",
        "Before you run this cell, make sure you have run both **Install Packages** and **Import and Setup**.\n",
        "\n",
        "**Remember to stop this cell before you go on to the next one.**\n",
        "![image.png]() means the cell is running, ![image.png]() means the cell is idle."
      ]
    },
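    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For intuition, the \"Temperature\" control maps to the sampling temperature, which rescales the model's token logits before the softmax. A minimal sketch of that computation (illustrative only; the actual API internals are not exposed):\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def softmax_with_temperature(logits, temp):\n",
        "    # Higher temp flattens the distribution (more creative / more random);\n",
        "    # lower temp sharpens it toward the most likely token. temp must be > 0.\n",
        "    scaled = [l / temp for l in logits]\n",
        "    m = max(scaled)\n",
        "    exps = [math.exp(s - m) for s in scaled]\n",
        "    total = sum(exps)\n",
        "    return [e / total for e in exps]\n",
        "```"
      ]
    },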
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 538
        },
        "id": "a8t6IKgLVQvF",
        "outputId": "0fc3bfe0-a8a9-4fc9-9ea6-102aea879863"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Running on local URL:  http://127.0.0.1:7862\n"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "d:\\Anaconda\\envs\\langchain\\lib\\site-packages\\gradio\\analytics.py:106: UserWarning: IMPORTANT: You are using gradio version 4.44.0, however version 4.44.1 is available, please upgrade. \n",
            "--------\n",
            "  warnings.warn(\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.\n"
          ]
        },
        {
          "data": {
            "text/html": [
              "<div><iframe src=\"http://127.0.0.1:7862/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/plain": []
          },
          "execution_count": 6,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "## TODO: Input the prompt in the \"\"\n",
        "prompt_for_summarization = \"Please summarize the following article in a few sentences.\"\n",
        "\n",
        "# function to reset the conversation\n",
        "def reset() -> List:\n",
        "    return []\n",
        "\n",
        "# function to call the model to generate\n",
        "def interact_summarization(prompt: str, article: str, temp = 1.0) -> List[Tuple[str, str]]:\n",
        "    '''\n",
        "    * Arguments\n",
        "\n",
        "      - prompt: the prompt that we use in this section\n",
        "\n",
        "      - article: the article to be summarized\n",
        "\n",
        "      - temp: the temperature parameter of this model. Temperature is used to control the output of the chatbot.\n",
        "              The higher the temperature is, the more creative response you will get.\n",
        "\n",
        "    '''\n",
        "    input = f\"{prompt}\\n{article}\"\n",
        "    response = client.chat.completions.create(\n",
        "            model=\"gpt-3.5-turbo\",\n",
        "            messages = [{'role':'user','content': input}],\n",
        "            temperature = temp,\n",
        "            max_tokens=200,\n",
        "    )\n",
        "\n",
        "    return [(input, response.choices[0].message.content)]\n",
        "\n",
        "# function to export the whole conversation\n",
        "def export_summarization(chatbot: List[Tuple[str, str]], article: str) -> None:\n",
        "    '''\n",
        "    * Arguments\n",
        "\n",
        "      - chatbot: the conversation log, stored as a list of (input, response) tuples\n",
        "\n",
        "      - article: the article to be summarized\n",
        "\n",
        "    '''\n",
        "    target = {\"chatbot\": chatbot, \"article\": article}\n",
        "    with open(\"part1.json\", \"w\") as file:\n",
        "        json.dump(target, file)\n",
        "\n",
        "# this part generates the Gradio UI interface\n",
        "with gr.Blocks() as demo:\n",
        "    gr.Markdown(\"# Part1: Summarization\\nFill in any article you like and let the chatbot summarize it for you!!\")\n",
        "    chatbot = gr.Chatbot()\n",
        "    prompt_textbox = gr.Textbox(label=\"Prompt\", value=prompt_for_summarization, visible=False)\n",
        "    article_textbox = gr.Textbox(label=\"Article\", interactive = True, value = \"With house prices soaring, it's not easy finding somewhere to live. And this community has thrown in the towel. Meet Seattle's rolling neighborhood of RVs, where each unassuming vehicle is a capsule home. The unusual format has been captured in a series of photographs by visual journalist Anna Erickson. Meet Bud Dodson, 57, and welcome to his home: An RV in Seattle's SoDo where he watches over the parking lot in exchange for a spot . No place like home: John Warden, 52, has turned his $200 vehicle into his home after his apartment burned down years ago . There are around 30 drivers that float in and out of this parking lot in the SoDo (South of Downtown) area of the city in Washington State. One might not notice them in the mornings as hundreds of workers in the nearby factories, such as Starbucks, park up and rush into work. But on the weekends, as the rabble flocks back to their beds, this unique group remains. John Worden, 52, has been living in his vehicle for years since his apartment burned down and he was left homeless. He told Anna his car cost $200, and doesn't drive very well. But for a home, it's just about enough. Though plain on the outside, it is a Pandora's Box inside, Anna tells DailyMail.com. 'It was scattered with trinkets that he had been collecting over the years,' she explained, 'and a pile of beer cans that he was saving to turn in for money.' For work, he panhandles while helping people find parking spaces at Safeco Field stadium, where he used to be a cook. People come and go for work in the factories nearby, but on the weekend it is just the RV-dwellers that are left . Daily life: Here Bud can be seen preparing himself a barbecue on the gravel outside his capsule home, one of about 30 in the community . Eclectic: While Bud's RV is organized and functional, John's is full of trinkets and belongings dating back years . Alongside him - most of the time - is Bud Dodson, 57. While some are forced to move about regularly, Dodson, a maintenance man, looks after the parking lot in exchange for a semi-permanent spot. His home has its own unique stamp on it. 'He had really made the RV his home and taken good care of it,' Anna described. 'It was more functional [than John's] and a cleaner space with a bed, kitchen and bathroom.' Whether organized or eclectic, however, each one is home. 'None of them seem to want to move on,' Anna said. 'It's not perfect but they seem pretty content. Move in, move out: Some have agreements to stay, but others have to keep driving around to find a spot . John works as a panhandler at Safeco Fields stadium, where he used to work as a cook . He is content with his life in between the usual confines of society . Personal: To many this may just seem like a parking lot but for these men it is a very personal space . 'Bud is very grateful, he said the parking lot owner is just such a nice guy to let him live like this.' She came across them when she stopped to ask a seemingly homeless man for directions. 'We got talking,' she said, 'and he mentioned that he lived nearby in an RV. I went round to look and there was a whole bunch of them.' Curious, she spent about two months returning to the spot, meeting with the community and building their trust. 'These RVs are their homes so it's a very personal thing,' she explained.\")\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"#  Temperature\\n Temperature is used to control the output of the chatbot. The higher the temperature is, the more creative response you will get.\")\n",
        "        temperature_slider = gr.Slider(0.0, 2.0, 1.0, step = 0.1, label=\"Temperature\")\n",
        "    with gr.Row():\n",
        "        sent_button = gr.Button(value=\"Send\")\n",
        "        reset_button = gr.Button(value=\"Reset\")\n",
        "\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"#  Save your Result.\\n After you get a satisfying result, click the Export button to record it.\")\n",
        "        export_button = gr.Button(value=\"Export\")\n",
        "    sent_button.click(interact_summarization, inputs=[prompt_textbox, article_textbox, temperature_slider], outputs=[chatbot])\n",
        "    reset_button.click(reset, outputs=[chatbot])\n",
        "    export_button.click(export_summarization, inputs=[chatbot, article_textbox])\n",
        "\n",
        "\n",
        "demo.launch(share=True)"
      ]
    },
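    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The \"Export\" button above writes part1.json with `json.dump`. A small round-trip sketch (the sample values are made up) showing how the check cell later recovers the summary with `chatbot[0][-1]`:\n",
        "\n",
        "```python\n",
        "import json, os, tempfile\n",
        "\n",
        "# Made-up sample matching the exported structure: 'chatbot' is a list of\n",
        "# (input, response) pairs and 'article' is the original text.\n",
        "target = {'chatbot': [['prompt + article', 'a short summary']],\n",
        "          'article': 'the original article text'}\n",
        "path = os.path.join(tempfile.mkdtemp(), 'part1.json')\n",
        "with open(path, 'w') as f:\n",
        "    json.dump(target, f)\n",
        "with open(path) as f:\n",
        "    loaded = json.load(f)\n",
        "summarization = loaded['chatbot'][0][-1]\n",
        "```"
      ]
    },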
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LltjuF68krJH"
      },
      "source": [
        "### Check and print your result\n",
        "This part is for you to check whether your \"part1.json\" file contains all the correct contents. After that, you can copy the result to our grading system and get your score.\n",
        "\n",
        "You should:\n",
        "1. Make sure the file list has the \"part1.json\" you want to hand over to the grading system. (If not, you can upload it using the upload button.)\n",
        "2. Hit the run button. A **frozen** Gradio interface will appear that recreates your result.\n",
        "3. After you confirm that the result is correct, you can scroll down to the part that states: **\"Copy this part to the grading system.\"** There should be two blocks: one labeled **\"article\"** and the other **\"summarization\"**. Copy the corresponding value to the correct block in our grading system. Then hit \"Submit\" to get the score.\n",
        "![image.png]()\n",
        "---\n",
        "\n",
        "\n",
        "Before you run this cell, make sure you have run both **Install Packages** and **Import and Setup**.\n",
        "\n",
        "**Remember to stop this cell before you go on to the next one.**\n",
        "![image.png]() means the cell is running, ![image.png]() means the cell is idle."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "MTnClEiYkyGo"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Running on local URL:  http://127.0.0.1:7862\n",
            "\n",
            "To create a public link, set `share=True` in `launch()`.\n"
          ]
        },
        {
          "data": {
            "text/html": [
              "<div><iframe src=\"http://127.0.0.1:7862/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "d:\\Anaconda\\envs\\langchain\\lib\\site-packages\\gradio\\analytics.py:106: UserWarning: IMPORTANT: You are using gradio version 4.44.0, however version 4.44.1 is available, please upgrade. \n",
            "--------\n",
            "  warnings.warn(\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Keyboard interruption in main thread... closing server.\n"
          ]
        },
        {
          "data": {
            "text/plain": []
          },
          "execution_count": 5,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# load the conversation log json file\n",
        "with open(\"part1.json\", \"r\") as f:\n",
        "    context = json.load(f)\n",
        "\n",
        "chatbot = context['chatbot']\n",
        "article = context['article']\n",
        "summarization = chatbot[0][-1]\n",
        "\n",
        "# this part constructs the Gradio UI interface\n",
        "with gr.Blocks() as demo:\n",
        "    gr.Markdown(\"# Part1: Summarization\\nFill in any article you like and let the chatbot summarize it for you!!\")\n",
        "    chatbot = gr.Chatbot(value = context['chatbot'])\n",
        "    article_textbox = gr.Textbox(label=\"Article\", interactive = False, value = context['article'])\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"# Copy this part to the grading system.\")\n",
        "        gr.Textbox(label = \"article\", value = article, show_copy_button = True)\n",
        "        gr.Textbox(label=\"summarization\", value = summarization, show_copy_button = True)\n",
        "\n",
        "demo.launch(share = True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "X53VCaSCa_kT"
      },
      "source": [
        "## Part 2: Role-Play\n",
        "In this task, you are asked to prompt your chatbot into **playing a role-playing game**. You should assign it a character and prompt it into playing that character.\n",
        "\n",
        "You need to:\n",
        "1. Come up with a **character** you want the chatbot to play and a prompt that turns the chatbot into that character. Fill the character in **character_for_chatbot**, and fill the prompt in **prompt_for_roleplay**.\n",
        "2. **Hit the run button![image.png](). (The run button will turn into this state![image.png]() when successfully executed.)** It will pop up an interface that looks like this: (It may look a little bit different if you use dark mode.)\n",
        "![image.png]()\n",
        "3. Interact with the chatbot for **2 rounds**. Type what you want to say in the block \"Input\", then hit the button \"Send\". (You can use the \"Temperature\" slider to control the creativeness of the output.)\n",
        "4. If you **want to change your prompt or the character**, hit the run button again to stop the cell. Then go back to step 1.\n",
        "5. After you get the desired result, hit the button \"Export\" to save your result. There will be a file named part2.json appears in the file list. **Remember to download it to your own computer before it disappears.**\n",
        "\n",
        "Note:\n",
        "\n",
        "\n",
        "*  **If you hit the \"Export\" button again, the previous result will be overwritten, so make sure to download it first.**\n",
        "*  **You should keep in mind that even with the exact same prompt, the output might still differ.**\n",
        "---\n",
        "\n",
        "\n",
        "Before you run this cell, make sure you have run both **Install Packages** and **Import and Setup**.\n",
        "\n",
        "**Remember to stop this cell before you go on to the next one.**\n",
        "A stop icon on the run button means the cell is running; a play icon means it is idle."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "ImUk4GsFa-70"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Running on local URL:  http://127.0.0.1:7863\n",
            "\n",
            "To create a public link, set `share=True` in `launch()`.\n"
          ]
        },
        {
          "data": {
            "text/html": [
              "<div><iframe src=\"http://127.0.0.1:7863/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Keyboard interruption in main thread... closing server.\n"
          ]
        },
        {
          "data": {
            "text/plain": []
          },
          "execution_count": 10,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# TODO: Fill in the below two lines: character_for_chatbot and prompt_for_roleplay\n",
        "# The first one is the character you want your chatbot to play\n",
        "# The second one is the prompt to make the chatbot be a certain character\n",
        "character_for_chatbot = \"面试官\"  # \"Interviewer\"\n",
        "prompt_for_roleplay = \"我需要你对我AI相关的面试，仅提出问题\"  # \"Give me an AI-related interview; only ask questions\"\n",
        "\n",
        "# function to clear the conversation\n",
        "def reset() -> List:\n",
        "    return []\n",
        "\n",
        "# function to call the model to generate\n",
        "def interact_roleplay(chatbot: List[Tuple[str, str]], user_input: str, temp=1.0) -> List[Tuple[str, str]]:\n",
        "    '''\n",
        "    * Arguments\n",
        "\n",
        "      - chatbot: the conversation so far, stored as a list of (user, bot) tuples\n",
        "\n",
        "      - user_input: the user input of this round of conversation\n",
        "\n",
        "      - temp: the temperature parameter of the model. Temperature controls the randomness of the output:\n",
        "              the higher the temperature, the more creative the response.\n",
        "\n",
        "    '''\n",
        "    try:\n",
        "        messages = []\n",
        "        for input_text, response_text in chatbot:\n",
        "            messages.append({'role': 'user', 'content': input_text})\n",
        "            messages.append({'role': 'assistant', 'content': response_text})\n",
        "\n",
        "        messages.append({'role': 'user', 'content': user_input})\n",
        "\n",
        "        response = client.chat.completions.create(\n",
        "            model=\"gpt-3.5-turbo\",\n",
        "            messages = messages,\n",
        "            temperature = temp,\n",
        "            max_tokens=200,\n",
        "        )\n",
        "        chatbot.append((user_input, response.choices[0].message.content))\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"Error occurred: {e}\")\n",
        "        chatbot.append((user_input, f\"Sorry, an error occurred: {e}\"))\n",
        "    return chatbot\n",
        "\n",
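        "# For illustration only (a sketch, not executed): after one completed round,\n",
        "# the message list sent to chat.completions.create looks like:\n",
        "#   [{'role': 'user', 'content': '<first user input>'},\n",
        "#    {'role': 'assistant', 'content': '<first model reply>'},\n",
        "#    {'role': 'user', 'content': '<new user input>'}]\n",
        "\n",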
        "# function to export the whole conversation log\n",
        "def export_roleplay(chatbot: List[Tuple[str, str]], description: str) -> None:\n",
        "    '''\n",
        "    * Arguments\n",
        "\n",
        "      - chatbot: the conversation log, stored as a list of (user, bot) tuples\n",
        "\n",
        "      - description: the description of this task\n",
        "\n",
        "    '''\n",
        "    target = {\"chatbot\": chatbot, \"description\": description}\n",
        "    with open(\"part2.json\", \"w\") as file:\n",
        "        json.dump(target, file)\n",
        "\n",
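        "# part2.json then contains, for example (a sketch of the shape only):\n",
        "#   {\"chatbot\": [[\"<user>\", \"<bot>\"], ...], \"description\": \"<character>\"}\n",
        "\n",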
        "first_dialogue = interact_roleplay([], prompt_for_roleplay)\n",
        "\n",
        "# this part constructs the Gradio UI interface\n",
        "with gr.Blocks() as demo:\n",
        "    gr.Markdown(\"# Part2: Role Play\\nThe chatbot wants to play a role-playing game with you; try interacting with it!!\")\n",
        "    chatbot = gr.Chatbot(value = first_dialogue)\n",
        "    description_textbox = gr.Textbox(label=\"The character the bot is playing\", interactive = False, value=character_for_chatbot)\n",
        "    input_textbox = gr.Textbox(label=\"Input\", value = \"\")\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"#  Temperature\\nTemperature is used to control the output of the chatbot. The higher the temperature, the more creative the response.\")\n",
        "        temperature_slider = gr.Slider(0.0, 2.0, 1.0, step = 0.1, label=\"Temperature\")\n",
        "    with gr.Row():\n",
        "        sent_button = gr.Button(value=\"Send\")\n",
        "        reset_button = gr.Button(value=\"Reset\")\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"#  Save your result\\nAfter you get a satisfying result, click the Export button to record it.\")\n",
        "        export_button = gr.Button(value=\"Export\")\n",
        "    sent_button.click(interact_roleplay, inputs=[chatbot, input_textbox, temperature_slider], outputs=[chatbot])\n",
        "    reset_button.click(reset, outputs=[chatbot])\n",
        "    export_button.click(export_roleplay, inputs=[chatbot, description_textbox])\n",
        "\n",
        "\n",
        "demo.launch(debug = True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Yb3nt52qk1nn"
      },
      "source": [
        "### Check and print your result\n",
        "This part is for you to check whether your \"part2.json\" file contains all the correct contents. After that, you can copy the result to our grading system to get your score.\n",
        "\n",
        "You should:\n",
        "1. Make sure the file list has the \"part2.json\" you want to hand over to the grading system. (If not, you can upload it using the upload button.)\n",
        "2. Hit the run button. A **frozen** Gradio interface recreating your result will appear.\n",
        "3. After you confirm that the result is correct, scroll down to the part that states: **\"Copy this part to the grading system.\"** There should be two blocks: one labeled **\"role\"** and the other **\"dialogue\"**. Copy the corresponding values into the matching blocks in our grading system, then hit \"Submit\" to get your score.\n",
        "---\n",
        "\n",
        "\n",
        "Before you run this cell, make sure you have run both **Install Packages** and **Import and Setup**.\n",
        "\n",
        "**Remember to stop this cell before you go on to the next one.**\n",
        "A stop icon on the run button means the cell is running; a play icon means it is idle."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4E5D57Rvk1nn"
      },
      "outputs": [],
      "source": [
        "# loads the conversation log json file\n",
        "with open(\"part2.json\", \"r\") as f:\n",
        "    context = json.load(f)\n",
        "\n",
        "# traverse through and load the conversation log properly\n",
        "chatbot = context['chatbot']\n",
        "role = context['description']\n",
        "dialogue = \"\"\n",
        "for i, (user, bot) in enumerate(chatbot):\n",
        "    # the first round is the hidden role-play prompt, so its user text is skipped\n",
        "    if i != 0:\n",
        "        dialogue += f\"User: {user}\\n\"\n",
        "    dialogue += f\"Bot: {bot}\\n\"\n",
        "\n",
        "# this part constructs the Gradio UI interface\n",
        "with gr.Blocks() as demo:\n",
        "    gr.Markdown(\"# Part2: Role Play\\nThe chatbot wants to play a role-playing game with you; try interacting with it!!\")\n",
        "    chatbot = gr.Chatbot(value = context['chatbot'])\n",
        "    description_textbox = gr.Textbox(label=\"The character the bot is playing\", interactive = False, value=context['description'])\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"# Copy this part to the grading system.\")\n",
        "        gr.Textbox(label = \"role\", value = role, show_copy_button = True)\n",
        "        gr.Textbox(label = \"dialogue\", value = dialogue, show_copy_button = True)\n",
        "\n",
        "demo.launch(debug = True)"
      ]
    },
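    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional sanity check (a minimal sketch; assumes part2.json is in the file list):\n",
        "# verify the exported log has the shape the grading flow expects before copying --\n",
        "# a string \"description\" and a \"chatbot\" list of [user, bot] pairs.\n",
        "with open(\"part2.json\", \"r\") as f:\n",
        "    ctx = json.load(f)\n",
        "assert isinstance(ctx[\"description\"], str)\n",
        "assert isinstance(ctx[\"chatbot\"], list)\n",
        "assert all(len(pair) == 2 for pair in ctx[\"chatbot\"])\n",
        "print(f\"OK: {len(ctx['chatbot'])} rounds recorded for role: {ctx['description']}\")"
      ]
    },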
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BTrh4gK6ZdgN"
      },
      "source": [
        "## Part 3: Customized Task 多轮对话应用\n",
        "In this part, you are asked to prompt your chatbot into **performing a certain task**. First come up with a task you want your chatbot to perform, then write a prompt that makes it perform that task.\n",
        "\n",
        "You need to:\n",
        "1. Come up with a task and a prompt for it. Fill the task description in **chatbot_task** and the prompt in **prompt_for_task**.\n",
        "2. **Hit the run button.** (The run button will turn into a stop icon while the cell is running.) An interface will pop up. (It may look a little bit different if you use dark mode.)\n",
        "3. Interact with it for **less than 3 rounds**. Type your input in \"Input\" and hit the \"Send\" button. (You can use the \"Temperature\" slider to control the creativity of the output.)\n",
        "4. If you **want to change your prompt or the task**, hit the run button again to stop the cell, then go back to step 1.\n",
        "5. After you get the desired result, hit the \"Export\" button to save your result. A file named part3.json will appear in the file list. **Remember to download it to your own computer before it disappears.**\n",
        "\n",
        "Note:\n",
        "\n",
        "\n",
        "*  **If you hit the \"Export\" button again, the previous result will be overwritten, so make sure to download it first.**\n",
        "*  **You should keep in mind that even with the exact same prompt, the output might still differ.**\n",
        "\n",
        "\n",
        "---\n",
        "\n",
        "\n",
        "Before you run this cell, make sure you have run both **Install Packages** and **Import and Setup**.\n",
        "\n",
        "**Remember to stop this cell before you go on to the next one.**\n",
        "A stop icon on the run button means the cell is running; a play icon means it is idle.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "lBWjkrXbZgfL"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Running on local URL:  http://127.0.0.1:7863\n",
            "\n",
            "To create a public link, set `share=True` in `launch()`.\n"
          ]
        },
        {
          "data": {
            "text/html": [
              "<div><iframe src=\"http://127.0.0.1:7863/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "d:\\Anaconda\\envs\\langchain\\lib\\site-packages\\gradio\\analytics.py:106: UserWarning: IMPORTANT: You are using gradio version 4.44.0, however version 4.44.1 is available, please upgrade. \n",
            "--------\n",
            "  warnings.warn(\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Keyboard interruption in main thread... closing server.\n"
          ]
        },
        {
          "data": {
            "text/plain": []
          },
          "execution_count": 9,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# TODO: Fill in the below two lines: chatbot_task and prompt_for_task\n",
        "# The first one tells the user what task the chatbot can perform\n",
        "# The second one is the prompt that makes the chatbot able to do that task\n",
        "chatbot_task = \"小学数学老师（输入“开始”）\"  # \"Elementary-school math teacher (type 开始 to begin)\"\n",
        "prompt_for_task = \"现在开始，你将扮演一个出小学数学题的老师，当我说开始时提供一个简单的数学题，接收到正确回答后进行下一题，否则给我答案\"  # \"From now on, play a teacher setting elementary math problems: when I say 'start', give a simple problem; after a correct answer move to the next one, otherwise give me the answer\"\n",
        "\n",
        "# function to clear the conversation\n",
        "def reset() -> List:\n",
        "    return []\n",
        "\n",
        "# function to call the model to generate\n",
        "def interact_customize(chatbot: List[Tuple[str, str]], prompt: str ,user_input: str, temperature = 1.0) -> List[Tuple[str, str]]:\n",
        "    '''\n",
        "    * Arguments\n",
        "\n",
        "      - chatbot: the conversation so far, stored as a list of (user, bot) tuples\n",
        "\n",
        "      - prompt: the prompt for your designated task\n",
        "\n",
        "      - user_input: the user input of this round of conversation\n",
        "\n",
        "      - temperature: the temperature parameter of the model. Temperature controls the randomness of the output:\n",
        "              the higher the temperature, the more creative the response.\n",
        "\n",
        "    '''\n",
        "    try:\n",
        "        messages = []\n",
        "        messages.append({'role': 'user', 'content': prompt})\n",
        "        for input_text, response_text in chatbot:\n",
        "            messages.append({'role': 'user', 'content': input_text})\n",
        "            messages.append({'role': 'assistant', 'content': response_text})\n",
        "\n",
        "        messages.append({'role': 'user', 'content': user_input})\n",
        "\n",
        "        response = client.chat.completions.create(\n",
        "            model=\"gpt-3.5-turbo\",\n",
        "            messages = messages,\n",
        "            temperature = temperature,\n",
        "            max_tokens=200,\n",
        "        )\n",
        "\n",
        "        chatbot.append((user_input, response.choices[0].message.content))\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"Error occurred: {e}\")\n",
        "        chatbot.append((user_input, f\"Sorry, an error occurred: {e}\"))\n",
        "    return chatbot\n",
        "\n",
        "# function to export the whole conversation log\n",
        "def export_customized(chatbot: List[Tuple[str, str]], description: str) -> None:\n",
        "    '''\n",
        "    * Arguments\n",
        "\n",
        "      - chatbot: the conversation log, stored as a list of (user, bot) tuples\n",
        "\n",
        "      - description: the description of this task\n",
        "\n",
        "    '''\n",
        "    target = {\"chatbot\": chatbot, \"description\": description}\n",
        "    with open(\"part3.json\", \"w\") as file:\n",
        "        json.dump(target, file)\n",
        "\n",
        "# this part constructs the Gradio UI interface\n",
        "with gr.Blocks() as demo:\n",
        "    gr.Markdown(\"# Part3: Customized task\\nThe chatbot is able to perform a certain task. Try to interact with it!!\")\n",
        "    chatbot = gr.Chatbot()\n",
        "    desc_textbox = gr.Textbox(label=\"Description of the task\", value=chatbot_task, interactive=False)\n",
        "    prompt_textbox = gr.Textbox(label=\"Prompt\", value=prompt_for_task, visible=False)\n",
        "    input_textbox = gr.Textbox(label=\"Input\")\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"#  Temperature\\nTemperature is used to control the output of the chatbot. The higher the temperature, the more creative the response.\")\n",
        "        temperature_slider = gr.Slider(0.0, 2.0, 1.0, step = 0.1, label=\"Temperature\")\n",
        "    with gr.Row():\n",
        "        sent_button = gr.Button(value=\"Send\")\n",
        "        reset_button = gr.Button(value=\"Reset\")\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"#  Save your result\\nAfter you get a satisfying result, click the Export button to record it.\")\n",
        "        export_button = gr.Button(value=\"Export\")\n",
        "    sent_button.click(interact_customize, inputs=[chatbot, prompt_textbox, input_textbox, temperature_slider], outputs=[chatbot])\n",
        "    reset_button.click(reset, outputs=[chatbot])\n",
        "    export_button.click(export_customized, inputs=[chatbot, desc_textbox])\n",
        "\n",
        "demo.launch(debug = True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4VY6TbaBlmxL"
      },
      "source": [
        "### Check and print your result\n",
        "This part is for you to check whether your \"part3.json\" file contains all the correct contents. After that, you can copy the result to our grading system to get your score.\n",
        "\n",
        "You should:\n",
        "1. Make sure the file list has the \"part3.json\" you want to hand over to the grading system. (If not, you can upload it using the upload button.)\n",
        "2. Hit the run button. A **frozen** Gradio interface recreating your result will appear.\n",
        "3. After you confirm that the result is correct, scroll down to the part that states: **\"Copy this part to the grading system.\"** There should be two blocks: one labeled **\"description\"** and the other **\"dialogue\"**. Copy the corresponding values into the matching blocks in our grading system, then hit \"Submit\" to get your score.\n",
        "---\n",
        "\n",
        "\n",
        "Before you run this cell, make sure you have run both **Install Packages** and **Import and Setup**.\n",
        "\n",
        "**Remember to stop this cell before you go on to the next one.**\n",
        "A stop icon on the run button means the cell is running; a play icon means it is idle."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ca4uJh8jlmxS"
      },
      "outputs": [],
      "source": [
        "# loads the conversation log json file\n",
        "with open(\"part3.json\", \"r\") as f:\n",
        "    context = json.load(f)\n",
        "\n",
        "# traverse and load the conversation log properly\n",
        "chatbot = context['chatbot']\n",
        "desc = context['description']\n",
        "dialogue = \"\"\n",
        "for user, bot in chatbot:\n",
        "    dialogue += f\"User: {user}\\n\"\n",
        "    dialogue += f\"Bot: {bot}\\n\"\n",
        "\n",
        "# this part constructs the Gradio UI interface\n",
        "with gr.Blocks() as demo:\n",
        "    gr.Markdown(\"# Part3: Customized task\\nThe chatbot is able to perform a certain task. Try to interact with it!!\")\n",
        "    chatbot = gr.Chatbot(value = context['chatbot'])\n",
        "    desc_textbox = gr.Textbox(label=\"Description of the task\", value=context['description'], interactive=False)\n",
        "    with gr.Column():\n",
        "        gr.Markdown(\"# Copy this part to the grading system.\")\n",
        "        gr.Textbox(label = \"description\", value = desc, show_copy_button = True)\n",
        "        gr.Textbox(label = \"dialogue\", value = dialogue, show_copy_button = True)\n",
        "\n",
        "demo.launch(debug = True)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "langchain",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.9.19"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
