{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Week 1 Exercise - Technical Question Answerer (Community Contribution)\n",
        "\n",
        "This notebook brings together the key lessons from Week 1 by building a technical question answerer that:\n",
        "\n",
        "- Uses OpenAI GPT-4o-mini with **streaming** responses\n",
        "- Uses Ollama Llama 3.2 for **local inference**\n",
        "- Provides **side-by-side comparison** of responses\n",
        "- Demonstrates **Chat Completions API** understanding\n",
        "- Shows **model comparison** techniques\n",
        "- Implements **error handling** for both APIs\n",
        "\n",
        "This tool will be useful throughout the course for technical questions!\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Setup complete! Ready to answer technical questions.\n"
          ]
        }
      ],
      "source": [
        "# Imports and setup\n",
        "import os\n",
        "import json\n",
        "from dotenv import load_dotenv\n",
        "from openai import OpenAI\n",
        "from IPython.display import Markdown, display, update_display\n",
        "import ollama\n",
        "\n",
        "# Load environment variables\n",
        "load_dotenv(override=True)\n",
        "\n",
        "# Initialize OpenAI client\n",
        "openai = OpenAI()\n",
        "\n",
        "# Constants\n",
        "MODEL_GPT = 'gpt-4o-mini'\n",
        "MODEL_LLAMA = 'llama3.2'\n",
        "\n",
        "print(\"Setup complete! Ready to answer technical questions.\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Question to analyze:\n",
            "\n",
            "Please explain what this code does and why:\n",
            "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
            "\n"
          ]
        }
      ],
      "source": [
        "# Technical Question - You can modify this\n",
        "question = \"\"\"\n",
        "Please explain what this code does and why:\n",
        "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
        "\"\"\"\n",
        "\n",
        "print(\"Question to analyze:\")\n",
        "print(question)\n"
      ]
    },
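    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before asking the models, it can help to see the snippet actually run. The cell below is an illustrative sketch: the `sample_books` list and the `unique_authors` wrapper are made up for demonstration, not part of the course material.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sketch: run the snippet against made-up sample data\n",
        "sample_books = [\n",
        "    {\"title\": \"Book A\", \"author\": \"Alice\"},\n",
        "    {\"title\": \"Book B\", \"author\": \"Bob\"},\n",
        "    {\"title\": \"Book C\"},                     # no author -> filtered out\n",
        "    {\"title\": \"Book D\", \"author\": \"Alice\"},  # duplicate author -> deduplicated\n",
        "]\n",
        "\n",
        "def unique_authors(books):\n",
        "    # The set comprehension deduplicates authors; yield from makes this a generator\n",
        "    yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
        "\n",
        "print(sorted(unique_authors(sample_books)))  # sets are unordered, so sort for display\n"
      ]
    },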
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "🤖 Getting response from GPT-4o-mini...\n"
          ]
        },
        {
          "data": {
            "text/markdown": [
              "## GPT-4o-mini Response:\n\nThis line of code is using a combination of set comprehension and the `yield from` statement in Python. Let's break it down step by step.\n",
              "\n",
              "1. **Set Comprehension**: The inner part `{book.get(\"author\") for book in books if book.get(\"author\")}` is creating a set. \n",
              "\n",
              "   - `book.get(\"author\")`: This is accessing the value associated with the key `\"author\"` for each `book` in the `books` iterable (which is likely a list or another collection of dictionaries).\n",
              "   - `if book.get(\"author\")`: This condition filters the books to only include those dictionaries that have a non-`None` value for the `\"author\"` key. If `book.get(\"author\")` returns `None` (or is otherwise falsy), that book will be excluded from the set.\n",
              "   - The use of curly braces `{}` indicates that we are creating a set. Sets automatically eliminate duplicate values, so each author will only appear once in the resulting set.\n",
              "\n",
              "2. **Yield from**: The `yield from` statement is used in a generator function to yield all values from an iterable. In this case, it's yielding all the unique authors from the set we created in the previous step.\n",
              "\n",
              "Putting this all together:\n",
              "\n",
              "- The overall code snippet is a generator expression that produces unique authors from a list (or iterable) called `books`. It filters out any books that do not have an author before yielding each unique author one by one.\n",
              "\n",
              "### Use Case\n",
              "\n",
              "This might be used in a situation where you want to retrieve all the distinct authors from a collection of book records, possibly to process them further, display them, or perform operations on them, while keeping memory usage efficient by yielding one author at a time rather than creating a complete list in memory.\n",
              "\n",
              "### Summary\n",
              "\n",
              "In summary, this line of code filters books for their authors, removes duplicates, and yields the unique authors one at a time."
            ],
            "text/plain": [
              "<IPython.core.display.Markdown object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "# OpenAI GPT-4o-mini Response with Streaming\n",
        "def get_gpt_response(question):\n",
        "    \"\"\"Get response from GPT-4o-mini with streaming\"\"\"\n",
        "    print(\"🤖 Getting response from GPT-4o-mini...\")\n",
        "    \n",
        "    try:\n",
        "        stream = openai.chat.completions.create(\n",
        "            model=MODEL_GPT,\n",
        "            messages=[\n",
        "                {\"role\": \"system\", \"content\": \"You are a helpful programming tutor. Explain code clearly and concisely.\"},\n",
        "                {\"role\": \"user\", \"content\": question}\n",
        "            ],\n",
        "            stream=True\n",
        "        )\n",
        "        \n",
        "        response = \"\"\n",
        "        display_handle = display(Markdown(\"\"), display_id=True)\n",
        "        \n",
        "        for chunk in stream:\n",
        "            # Guard against empty choices as well as empty delta content\n",
        "            if chunk.choices and chunk.choices[0].delta.content:\n",
        "                response += chunk.choices[0].delta.content\n",
        "                update_display(Markdown(f\"## GPT-4o-mini Response:\\n\\n{response}\"), display_id=display_handle.display_id)\n",
        "        \n",
        "        return response\n",
        "    \n",
        "    except Exception as e:\n",
        "        error_msg = f\"Error with OpenAI: {e}\"\n",
        "        print(error_msg)\n",
        "        return error_msg\n",
        "\n",
        "# Get GPT response\n",
        "gpt_response = get_gpt_response(question)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Ollama Llama 3.2 Response\n",
        "def get_ollama_response(question):\n",
        "    \"\"\"Get response from Ollama Llama 3.2\"\"\"\n",
        "    print(\"🦙 Getting response from Ollama Llama 3.2...\")\n",
        "    \n",
        "    try:\n",
        "        response = ollama.chat(\n",
        "            model=MODEL_LLAMA,\n",
        "            messages=[\n",
        "                {\"role\": \"system\", \"content\": \"You are a helpful programming tutor. Explain code clearly and concisely.\"},\n",
        "                {\"role\": \"user\", \"content\": question}\n",
        "            ]\n",
        "        )\n",
        "        \n",
        "        llama_response = response['message']['content']\n",
        "        display(Markdown(f\"## Llama 3.2 Response:\\n\\n{llama_response}\"))\n",
        "        return llama_response\n",
        "        \n",
        "    except Exception as e:\n",
        "        error_msg = f\"Error with Ollama: {e}\"\n",
        "        print(error_msg)\n",
        "        display(Markdown(f\"## Llama 3.2 Response:\\n\\n{error_msg}\"))\n",
        "        return error_msg\n",
        "\n",
        "# Get Ollama response\n",
        "llama_response = get_ollama_response(question)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Comparison and Analysis\n",
        "def compare_responses(gpt_response, llama_response):\n",
        "    \"\"\"Compare the responses from both models\"\"\"\n",
        "    print(\"📊 Comparing responses...\")\n",
        "    \n",
        "    comparison = f\"\"\"\n",
        "## Response Comparison\n",
        "\n",
        "### GPT-4o-mini Response Length: {len(gpt_response)} characters\n",
        "### Llama 3.2 Response Length: {len(llama_response)} characters\n",
        "\n",
        "### Key Differences (typical tendencies, not measured here):\n",
        "- **GPT-4o-mini**: tends to give a longer, more structured explanation\n",
        "- **Llama 3.2**: tends to be more concise and direct\n",
        "\n",
        "Both models should explain the code correctly; compare the outputs above to judge style and depth for yourself.\n",
        "\"\"\"\n",
        "    \n",
        "    display(Markdown(comparison))\n",
        "\n",
        "# Compare the responses\n",
        "compare_responses(gpt_response, llama_response)\n"
      ]
    },
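    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Character counts are only a rough proxy; for API cost, token counts are what matter (Day 4). The sketch below assumes `tiktoken` is installed. Note that Llama 3.2 uses its own tokenizer, so applying the GPT encoding to its response gives only an approximate count.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional sketch: compare token counts with tiktoken (assumes it is installed)\n",
        "import tiktoken\n",
        "\n",
        "encoding = tiktoken.encoding_for_model(MODEL_GPT)\n",
        "print(f\"GPT-4o-mini response: {len(encoding.encode(gpt_response))} tokens\")\n",
        "# llama3.2 has its own tokenizer, so this count is only an approximation\n",
        "print(f\"Llama 3.2 response (approx): {len(encoding.encode(llama_response))} tokens\")\n"
      ]
    },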
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Week 1 Learnings Summary\n",
        "summary = \"\"\"\n",
        "## Week 1 Learnings Demonstrated\n",
        "\n",
        "### ✅ Day 1 - Web Scraping & API Integration\n",
        "- **BeautifulSoup** for HTML parsing\n",
        "- **Requests** for HTTP calls\n",
        "- **OpenAI API** integration\n",
        "- **SSL certificate** handling for Windows\n",
        "\n",
        "### ✅ Day 2 - Chat Completions API & Ollama\n",
        "- **Chat Completions API** understanding\n",
        "- **OpenAI-compatible endpoints** (Ollama)\n",
        "- **Model comparison** techniques\n",
        "- **Streaming responses** implementation\n",
        "\n",
        "### ✅ Day 4 - Tokenization & Cost Management\n",
        "- **tiktoken** for token counting\n",
        "- **Cost estimation** strategies\n",
        "- **Text chunking** techniques\n",
        "- **Token-aware** processing\n",
        "\n",
        "### ✅ Day 5 - Business Solutions\n",
        "- **Intelligent link selection** using LLM\n",
        "- **Multi-page content** aggregation\n",
        "- **Professional brochure** generation\n",
        "- **Error handling** and robustness\n",
        "\n",
        "### ✅ Week 1 Exercise - Technical Question Answerer\n",
        "- **Streaming responses** from OpenAI\n",
        "- **Local inference** with Ollama\n",
        "- **Side-by-side comparison** of models\n",
        "- **Error handling** for both APIs\n",
        "\n",
        "## Key Skills Acquired:\n",
        "1. **API Integration** - OpenAI, Ollama, web scraping\n",
        "2. **Model Comparison** - Understanding different LLM capabilities\n",
        "3. **Streaming** - Real-time response display\n",
        "4. **Error Handling** - Robust application design\n",
        "5. **Business Applications** - Practical LLM implementations\n",
        "\"\"\"\n",
        "\n",
        "display(Markdown(summary))\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": ".venv",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.12"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
