{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# 1. Installing the dependencies"
      ],
      "metadata": {
        "id": "gE8aJtAf2RO9"
      }
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "collapsed": true,
        "id": "J8R4LfnSda8E",
        "outputId": "618c9ce7-0b0b-414b-a575-9b7da86d2f02"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Collecting mirascope[groq]\n",
            "  Downloading mirascope-1.25.4-py3-none-any.whl.metadata (8.5 kB)\n",
            "Requirement already satisfied: docstring-parser<1.0,>=0.15 in /usr/local/lib/python3.11/dist-packages (from mirascope[groq]) (0.16)\n",
            "Requirement already satisfied: jiter>=0.5.0 in /usr/local/lib/python3.11/dist-packages (from mirascope[groq]) (0.10.0)\n",
            "Requirement already satisfied: pydantic<3.0,>=2.7.4 in /usr/local/lib/python3.11/dist-packages (from mirascope[groq]) (2.11.7)\n",
            "Requirement already satisfied: typing-extensions>=4.10.0 in /usr/local/lib/python3.11/dist-packages (from mirascope[groq]) (4.14.1)\n",
            "Collecting groq<1,>=0.9.0 (from mirascope[groq])\n",
            "  Downloading groq-0.30.0-py3-none-any.whl.metadata (16 kB)\n",
            "Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.11/dist-packages (from groq<1,>=0.9.0->mirascope[groq]) (4.9.0)\n",
            "Requirement already satisfied: distro<2,>=1.7.0 in /usr/local/lib/python3.11/dist-packages (from groq<1,>=0.9.0->mirascope[groq]) (1.9.0)\n",
            "Requirement already satisfied: httpx<1,>=0.23.0 in /usr/local/lib/python3.11/dist-packages (from groq<1,>=0.9.0->mirascope[groq]) (0.28.1)\n",
            "Requirement already satisfied: sniffio in /usr/local/lib/python3.11/dist-packages (from groq<1,>=0.9.0->mirascope[groq]) (1.3.1)\n",
            "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from pydantic<3.0,>=2.7.4->mirascope[groq]) (0.7.0)\n",
            "Requirement already satisfied: pydantic-core==2.33.2 in /usr/local/lib/python3.11/dist-packages (from pydantic<3.0,>=2.7.4->mirascope[groq]) (2.33.2)\n",
            "Requirement already satisfied: typing-inspection>=0.4.0 in /usr/local/lib/python3.11/dist-packages (from pydantic<3.0,>=2.7.4->mirascope[groq]) (0.4.1)\n",
            "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.11/dist-packages (from anyio<5,>=3.5.0->groq<1,>=0.9.0->mirascope[groq]) (3.10)\n",
            "Requirement already satisfied: certifi in /usr/local/lib/python3.11/dist-packages (from httpx<1,>=0.23.0->groq<1,>=0.9.0->mirascope[groq]) (2025.7.9)\n",
            "Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.11/dist-packages (from httpx<1,>=0.23.0->groq<1,>=0.9.0->mirascope[groq]) (1.0.9)\n",
            "Requirement already satisfied: h11>=0.16 in /usr/local/lib/python3.11/dist-packages (from httpcore==1.*->httpx<1,>=0.23.0->groq<1,>=0.9.0->mirascope[groq]) (0.16.0)\n",
            "Downloading groq-0.30.0-py3-none-any.whl (131 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m131.1/131.1 kB\u001b[0m \u001b[31m3.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading mirascope-1.25.4-py3-none-any.whl (373 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m373.2/373.2 kB\u001b[0m \u001b[31m16.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hInstalling collected packages: mirascope, groq\n",
            "Successfully installed groq-0.30.0 mirascope-1.25.4\n",
            "Collecting datetime\n",
            "  Downloading DateTime-5.5-py3-none-any.whl.metadata (33 kB)\n",
            "Collecting zope.interface (from datetime)\n",
            "  Downloading zope.interface-7.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (44 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m44.4/44.4 kB\u001b[0m \u001b[31m2.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: pytz in /usr/local/lib/python3.11/dist-packages (from datetime) (2025.2)\n",
            "Requirement already satisfied: setuptools in /usr/local/lib/python3.11/dist-packages (from zope.interface->datetime) (75.2.0)\n",
            "Downloading DateTime-5.5-py3-none-any.whl (52 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m52.6/52.6 kB\u001b[0m \u001b[31m3.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading zope.interface-7.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (259 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m259.8/259.8 kB\u001b[0m \u001b[31m9.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hInstalling collected packages: zope.interface, datetime\n",
            "Successfully installed datetime-5.5 zope.interface-7.2\n"
          ]
        }
      ],
      "source": [
        "!pip install \"mirascope[groq]\"\n",
        "# Note: Python's built-in datetime module needs no installation; the 'datetime'\n",
        "# package on PyPI is the unrelated Zope DateTime and can safely be skipped.\n",
        "!pip install datetime"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Groq API Key\n",
        "This tutorial requires a Groq API key to make LLM calls. You can generate one at https://console.groq.com/keys."
      ],
      "metadata": {
        "id": "mFf4YcMr2VEV"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "from getpass import getpass\n",
        "os.environ['GROQ_API_KEY'] = getpass('Enter Groq API Key: ')"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "10Rsw0MQd0_P",
        "outputId": "b2063009-c690-450f-dab8-4bbcfde52e49"
      },
      "execution_count": 3,
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Enter Groq API Key: ··········\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 2. Importing the libraries & defining a Pydantic schema\n",
        "This section imports the required libraries and defines the `COTResult` Pydantic model. The schema structures each reasoning step with a title, content, and a `next_action` flag indicating whether the model should continue reasoning or return the final answer."
      ],
      "metadata": {
        "id": "98e31BFL2iMU"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from datetime import datetime  # used later for timing each reasoning step\n",
        "from typing import Literal\n",
        "\n",
        "from mirascope.core import groq\n",
        "from pydantic import BaseModel, Field\n",
        "\n",
        "\n",
        "history: list[dict] = []\n",
        "\n",
        "\n",
        "class COTResult(BaseModel):\n",
        "    title: str = Field(..., description=\"The title of the step\")\n",
        "    content: str = Field(..., description=\"The output content of the step\")\n",
        "    next_action: Literal[\"continue\", \"final_answer\"] = Field(\n",
        "        ..., description=\"The next action to take\"\n",
        "    )"
      ],
      "metadata": {
        "id": "dFy7K5gZd3jZ"
      },
      "execution_count": 8,
      "outputs": []
    },
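    {
      "cell_type": "markdown",
      "source": [
        "As a quick optional sanity check, we can validate a hand-written dict against the `COTResult` schema. This mirrors the shape of the JSON the model is asked to produce in `json_mode`; the example values below are made up for illustration."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Hypothetical example payload, validated against the COTResult schema\n",
        "example = COTResult.model_validate(\n",
        "    {\n",
        "        \"title\": \"Understand the problem\",\n",
        "        \"content\": \"Identify the knowns and unknowns before computing.\",\n",
        "        \"next_action\": \"continue\",\n",
        "    }\n",
        ")\n",
        "print(example.next_action)  # prints: continue"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },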
    {
      "cell_type": "markdown",
      "source": [
        "# 3. Defining Step-wise Reasoning and Final Answer Functions\n",
        "These functions form the core of the Chain-of-Thought (CoT) reasoning workflow. The `cot_step` function lets the model think iteratively by reviewing prior steps and deciding whether to continue or conclude, enabling deeper reasoning on multi-step problems. The `final_answer` function consolidates all reasoning into a single, focused response that is clean and ready for end-user consumption. Together, they help the model approach complex tasks more logically and transparently.\n"
      ],
      "metadata": {
        "id": "s_DAG6jA24Id"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "@groq.call(\"llama-3.3-70b-versatile\", json_mode=True, response_model=COTResult)\n",
        "def cot_step(prompt: str, step_number: int, previous_steps: str) -> str:\n",
        "    return f\"\"\"\n",
        "    You are an expert AI assistant that explains your reasoning step by step.\n",
        "    For this step, provide a title that describes what you're doing, along with the content.\n",
        "    Decide if you need another step or if you're ready to give the final answer.\n",
        "\n",
        "    Guidelines:\n",
        "    - Use AT MOST 5 steps to derive the answer.\n",
        "    - Be aware of your limitations as an LLM and what you can and cannot do.\n",
        "    - In your reasoning, include exploration of alternative answers.\n",
        "    - Consider you may be wrong, and if you are wrong in your reasoning, where it would be.\n",
        "    - Fully test all other possibilities.\n",
        "    - YOU ARE ALLOWED TO BE WRONG. When you say you are re-examining\n",
        "        - Actually re-examine, and use another approach to do so.\n",
        "        - Do not just say you are re-examining.\n",
        "\n",
        "    IMPORTANT: Do not use code blocks or programming examples in your reasoning. Explain your process in plain language.\n",
        "\n",
        "    This is step number {step_number}.\n",
        "\n",
        "    Question: {prompt}\n",
        "\n",
        "    Previous steps:\n",
        "    {previous_steps}\n",
        "    \"\"\"\n",
        "\n",
        "\n",
        "@groq.call(\"llama-3.3-70b-versatile\")\n",
        "def final_answer(prompt: str, reasoning: str) -> str:\n",
        "    return f\"\"\"\n",
        "    Based on the following chain of reasoning, provide a final answer to the question.\n",
        "    Only provide the text response without any titles or preambles.\n",
        "    Retain any formatting as instructed by the original prompt, such as exact formatting for free response or multiple choice.\n",
        "\n",
        "    Question: {prompt}\n",
        "\n",
        "    Reasoning:\n",
        "    {reasoning}\n",
        "\n",
        "    Final Answer:\n",
        "    \"\"\""
      ],
      "metadata": {
        "id": "my_VTyS0fEzy"
      },
      "execution_count": 9,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 4. Generating and Displaying Chain-of-Thought Responses\n",
        "This section defines two key functions to manage the full Chain-of-Thought reasoning loop:\n",
        "\n",
        "* `generate_cot_response` handles the iterative reasoning process. It sends the user query to the model step by step, tracks each step’s content, title, and response time, and stops when the model signals it has reached the final answer or after a maximum of 5 steps. It then calls `final_answer` to produce a clear conclusion based on the accumulated reasoning.\n",
        "\n",
        "* `display_cot_response` neatly prints the step-by-step breakdown along with the time taken for each step, followed by the final answer and the total processing time.\n",
        "\n",
        "Together, these functions help visualize how the model reasons through a complex prompt and allow for better transparency and debugging of multi-step outputs."
      ],
      "metadata": {
        "id": "0itQSpp-3ELh"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def generate_cot_response(\n",
        "    user_query: str,\n",
        ") -> tuple[list[tuple[str, str, float]], float]:\n",
        "    steps: list[tuple[str, str, float]] = []\n",
        "    total_thinking_time: float = 0.0\n",
        "    step_count: int = 1\n",
        "    reasoning: str = \"\"\n",
        "    previous_steps: str = \"\"\n",
        "\n",
        "    while True:\n",
        "        start_time: datetime = datetime.now()\n",
        "        cot_result = cot_step(user_query, step_count, previous_steps)\n",
        "        end_time: datetime = datetime.now()\n",
        "        thinking_time: float = (end_time - start_time).total_seconds()\n",
        "\n",
        "        steps.append(\n",
        "            (\n",
        "                f\"Step {step_count}: {cot_result.title}\",\n",
        "                cot_result.content,\n",
        "                thinking_time,\n",
        "            )\n",
        "        )\n",
        "        total_thinking_time += thinking_time\n",
        "\n",
        "        reasoning += f\"\\n{cot_result.content}\\n\"\n",
        "        previous_steps += f\"\\n{cot_result.content}\\n\"\n",
        "\n",
        "        if cot_result.next_action == \"final_answer\" or step_count >= 5:\n",
        "            break\n",
        "\n",
        "        step_count += 1\n",
        "\n",
        "    # Generate final answer\n",
        "    start_time = datetime.now()\n",
        "    final_result: str = final_answer(user_query, reasoning).content\n",
        "    end_time = datetime.now()\n",
        "    thinking_time = (end_time - start_time).total_seconds()\n",
        "    total_thinking_time += thinking_time\n",
        "\n",
        "    steps.append((\"Final Answer\", final_result, thinking_time))\n",
        "\n",
        "    return steps, total_thinking_time\n",
        "\n",
        "\n",
        "def display_cot_response(\n",
        "    steps: list[tuple[str, str, float]], total_thinking_time: float\n",
        ") -> None:\n",
        "    for title, content, thinking_time in steps:\n",
        "        print(f\"{title}:\")\n",
        "        print(content.strip())\n",
        "        print(f\"**Thinking time: {thinking_time:.2f} seconds**\\n\")\n",
        "\n",
        "    print(f\"**Total thinking time: {total_thinking_time:.2f} seconds**\")\n"
      ],
      "metadata": {
        "id": "R4DcEA-WfNEY"
      },
      "execution_count": 10,
      "outputs": []
    },
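    {
      "cell_type": "markdown",
      "source": [
        "To preview the output format without spending API calls, we can feed `display_cot_response` some made-up steps. The titles, contents, and timings below are placeholders in the same `(title, content, thinking_time)` shape that `generate_cot_response` returns:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Placeholder steps for illustration only; no model is called here\n",
        "demo_steps = [\n",
        "    (\"Step 1: Parse the problem\", \"List the knowns and unknowns.\", 0.50),\n",
        "    (\"Final Answer\", \"42\", 0.20),\n",
        "]\n",
        "display_cot_response(demo_steps, 0.70)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },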
    {
      "cell_type": "markdown",
      "source": [
        "# 5. Running the Chain-of-Thought Workflow\n",
        "The `run` function initiates the full Chain-of-Thought (CoT) reasoning process by sending a multi-step math word problem to the model. It begins by printing the user’s question, then uses `generate_cot_response` to compute a step-by-step reasoning trace. These steps, along with the total processing time, are displayed using `display_cot_response`.\n",
        "\n",
        "Finally, the function logs both the question and the model’s final answer into a shared `history` list, preserving the full interaction for future reference or auditing. This function ties together all earlier components into a complete, user-facing reasoning flow."
      ],
      "metadata": {
        "id": "5acz364p3RXF"
      }
    },
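    {
      "cell_type": "markdown",
      "source": [
        "As an independent check on the expected answer, the meeting time can be computed directly: the first train covers 60 km during its one-hour head start, leaving 300 − 60 = 240 km to close at a combined 60 + 90 = 150 km/h, i.e. 1.6 hours after 10:00 AM. The date below is arbitrary; only the clock time matters:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "from datetime import datetime, timedelta\n",
        "\n",
        "remaining_km = 300 - 60              # distance left when the second train departs\n",
        "combined_kmh = 60 + 90               # closing speed of the two trains\n",
        "hours = remaining_km / combined_kmh  # 1.6 hours\n",
        "meet = datetime(2024, 1, 1, 10, 0) + timedelta(hours=hours)\n",
        "print(meet.strftime(\"%I:%M %p\"))     # prints: 11:36 AM"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },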
    {
      "cell_type": "code",
      "source": [
        "def run() -> None:\n",
        "    question: str = \"If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?\"\n",
        "    print(\"(User):\", question)\n",
        "    # Generate COT response\n",
        "    steps, total_thinking_time = generate_cot_response(question)\n",
        "    display_cot_response(steps, total_thinking_time)\n",
        "\n",
        "    # Add the interaction to the history\n",
        "    history.append({\"role\": \"user\", \"content\": question})\n",
        "    history.append(\n",
        "        {\"role\": \"assistant\", \"content\": steps[-1][1]}\n",
        "    )  # Add only the final answer to the history\n",
        "\n",
        "\n",
        "# Run the function\n",
        "\n",
        "run()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ohsyDL9wfRj8",
        "outputId": "fb7a5e25-f5d0-45fc-d4b9-364eb61a89b3"
      },
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "(User): If a train leaves City A at 9:00 AM traveling at 60 km/h, and another train leaves City B (which is 300 km away from City A) at 10:00 AM traveling at 90 km/h toward City A, at what time will the trains meet?\n",
            "Step 1: Step 1: Understand the Problem and Identify Key Elements:\n",
            "To find the time when the two trains will meet, we need to consider the distance between them, their speeds, and the time difference in their departures. The first train travels at 60 km/h and leaves at 9:00 AM, while the second train travels at 90 km/h and leaves at 10:00 AM. The distance between City A and City B is 300 km. We will calculate the distance covered by the first train during the one-hour head start and then determine the combined speed of the two trains when moving towards each other.\n",
            "**Thinking time: 0.80 seconds**\n",
            "\n",
            "Step 2: Calculating Distance Covered and Combined Speed:\n",
            "The first train travels at 60 km/h and has a one-hour head start. So, in the first hour, it covers 60 km. When the second train starts, the distance between them is 300 - 60 = 240 km. Their combined speed when moving towards each other is 60 km/h + 90 km/h = 150 km/h. To find out how long it will take for them to meet, we need to divide the remaining distance by their combined speed.\n",
            "**Thinking time: 0.83 seconds**\n",
            "\n",
            "Step 3: Calculating the Meeting Time of the Two Trains:\n",
            "To find the time when the two trains will meet, we divide the remaining distance by their combined speed. The remaining distance is 240 km, and their combined speed is 150 km/h. So, the time it takes for them to meet after the second train starts is 240 km / 150 km/h = 1.6 hours. Since the second train starts at 10:00 AM, we need to add the travel time to this start time. 1.6 hours is equivalent to 1 hour and 36 minutes. Therefore, adding this to 10:00 AM gives us the meeting time.\n",
            "**Thinking time: 0.76 seconds**\n",
            "\n",
            "Final Answer:\n",
            "11:36 AM\n",
            "**Thinking time: 0.45 seconds**\n",
            "\n",
            "**Total thinking time: 2.84 seconds**\n"
          ]
        }
      ]
    }
  ]
}