{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "# AI Travel Agent with LangChain Tutorial\n",
        "\n",
        "This notebook demonstrates how to build an AI Travel Agent using LangChain on an AMD Ryzen AI PC with Lemonade Server. We'll cover:\n",
        "\n",
        "1. **Environment Setup** - Loading dependencies and API keys\n",
        "2. **LLM Configuration** - Connecting to the local Lemonade Server\n",
        "3. **Tool Creation** - Building weather, search, and flight tools\n",
        "4. **Custom Agent Implementation** - Building a robust agent execution system\n",
        "5. **Agent Testing** - Exercising the agent on different query types\n",
        "\n",
        "## Architecture Overview\n",
        "\n",
        "```\n",
        "User Query → LLM Decision → Tool Selection → Tool Execution → Final Answer\n",
        "```\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## 1. Environment Setup\n",
        "\n",
        "First, let's import all necessary libraries and configure our environment.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 26,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "✅ Environment setup complete!\n"
          ]
        }
      ],
      "source": [
        "# Core imports\n",
        "import os\n",
        "import requests\n",
        "import json\n",
        "from dotenv import load_dotenv\n",
        "\n",
        "# LangChain imports\n",
        "from langchain_community.chat_models import ChatOpenAI\n",
        "from langchain_community.utilities import GoogleSerperAPIWrapper\n",
        "from langchain.tools import Tool\n",
        "from langchain_community.agent_toolkits.amadeus.toolkit import AmadeusToolkit\n",
        "from langchain_community.agent_toolkits.load_tools import load_tools\n",
        "\n",
        "# Amadeus client\n",
        "from amadeus import Client\n",
        "\n",
        "# Load environment variables\n",
        "load_dotenv()\n",
        "\n",
        "print(\"✅ Environment setup complete!\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## 2. LLM Configuration\n",
        "\n",
        "We'll configure ChatOpenAI to connect to the local Lemonade Server, which provides an OpenAI-compatible API for local LLM inference.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "✅ LLM created: Llama-3.2-3B-Instruct-Hybrid\n",
            "\n",
            "🤖 LLM Response: I'd be happy to help you plan a trip. Can you please provide some details to get started? Here are s...\n"
          ]
        }
      ],
      "source": [
        "def create_llm(model_name: str = \"Llama-3.2-3B-Instruct-Hybrid\", temperature: float = 0.0):\n",
        "    \"\"\"\n",
        "    Create and initialize a ChatOpenAI client that connects to local Lemonade Server.\n",
        "    \n",
        "    Args:\n",
        "        model_name: The model identifier hosted on Lemonade Server\n",
        "        temperature: Sampling temperature (0.0 = deterministic)\n",
        "    \n",
        "    Returns:\n",
        "        ChatOpenAI: LLM client for generating responses\n",
        "    \"\"\"\n",
        "    try:\n",
        "        # Base URL of the local Lemonade Server's OpenAI-compatible API\n",
        "        base_url = \"http://localhost:8000/api/v0\"\n",
        "\n",
        "        # Create ChatOpenAI client with local API settings\n",
        "        llm = ChatOpenAI(\n",
        "            model_name=model_name,           # Model to use\n",
        "            temperature=temperature,         # Controls randomness\n",
        "            openai_api_base=base_url,        # Custom API endpoint\n",
        "            openai_api_key=\"none\",           # Not required for local server\n",
        "            verbose=False\n",
        "        )\n",
        "        \n",
        "        print(f\"✅ LLM created: {model_name}\")\n",
        "        return llm\n",
        "    except Exception as e:\n",
        "        print(f\"❌ Failed to create LLM: {e}\")\n",
        "        return None\n",
        "\n",
        "# Initialize the LLM\n",
        "llm = create_llm()\n",
        "\n",
        "# Test the LLM\n",
        "if llm:\n",
        "    test_response = llm.invoke(\"Hello! Can you help me plan a trip?\")\n",
        "    print(f\"\\n🤖 LLM Response: {test_response.content[:100]}...\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## 3. Tool Creation\n",
        "\n",
        "Tools are the core components that allow our agent to interact with external APIs and services. Let's create tools for weather, search, and flight information.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "✅ Weather tool created: Weather Information\n",
            "\n",
            "🌤️ Weather test result:\n",
            "Current Weather in Tokyo, JP:\n",
            "- Temperature: 27.21°C\n",
            "- Conditions: Heavy Intensity Rain\n",
            "- Humidity: 73%\n",
            "- Wind Speed: 6.13 m/s...\n"
          ]
        }
      ],
      "source": [
        "def get_weather_info(location):\n",
        "    \"\"\"\n",
        "    Get current weather information using OpenWeatherMap API.\n",
        "    \n",
        "    Args:\n",
        "        location (str): City name or \"city,country\" format\n",
        "        \n",
        "    Returns:\n",
        "        str: Formatted weather information\n",
        "    \"\"\"\n",
        "    # Get API key from environment\n",
        "    api_key = os.getenv(\"OPENWEATHER_API_KEY\")\n",
        "    \n",
        "    if not api_key:\n",
        "        return \"Error: OpenWeatherMap API key not found. Please set OPENWEATHER_API_KEY in your .env file.\"\n",
        "    \n",
        "    try:\n",
        "        # API endpoint and parameters\n",
        "        base_url = \"https://api.openweathermap.org/data/2.5/weather\"\n",
        "        params = {\n",
        "            \"q\": location,\n",
        "            \"appid\": api_key,\n",
        "            \"units\": \"metric\"  # Celsius\n",
        "        }\n",
        "        \n",
        "        # Make API request\n",
        "        response = requests.get(base_url, params=params, timeout=10)\n",
        "        response.raise_for_status()\n",
        "        \n",
        "        # Parse response\n",
        "        data = response.json()\n",
        "        \n",
        "        # Format response\n",
        "        city = data[\"name\"]\n",
        "        country = data[\"sys\"][\"country\"]\n",
        "        temp = data[\"main\"][\"temp\"]\n",
        "        description = data[\"weather\"][0][\"description\"]\n",
        "        \n",
        "        weather_info = f\"\"\"\n",
        "Current Weather in {city}, {country}:\n",
        "- Temperature: {temp}°C\n",
        "- Conditions: {description.title()}\n",
        "- Humidity: {data[\"main\"][\"humidity\"]}%\n",
        "- Wind Speed: {data[\"wind\"][\"speed\"]} m/s\n",
        "        \"\"\"\n",
        "        \n",
        "        return weather_info.strip()\n",
        "        \n",
        "    except Exception as e:\n",
        "        return f\"Error fetching weather data: {str(e)}\"\n",
        "\n",
        "def get_weather_tools():\n",
        "    \"\"\"Create weather tool for the agent.\"\"\"\n",
        "    weather_tool = Tool(\n",
        "        name=\"Weather Information\",\n",
        "        func=get_weather_info,\n",
        "        description=\"Get current weather information for a specific location. Input should be a city name or 'city,country' format.\"\n",
        "    )\n",
        "    \n",
        "    return [weather_tool]\n",
        "\n",
        "# Test weather tool\n",
        "weather_tools = get_weather_tools()\n",
        "print(f\"✅ Weather tool created: {weather_tools[0].name}\")\n",
        "\n",
        "# Test the weather function\n",
        "test_weather = get_weather_info(\"Tokyo\")\n",
        "print(f\"\\n🌤️ Weather test result:\\n{test_weather[:150]}...\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "✅ Search tools created: 2 tools\n",
            "❌ Error initializing Amadeus tools: `AmadeusToolkit` is not fully defined; you should define `Client`, then call `AmadeusToolkit.model_rebuild()`.\n",
            "\n",
            "For further information visit https://errors.pydantic.dev/2.11/u/class-not-fully-defined\n",
            "\n",
            "🛠️ Total tools available: 3\n",
            "Tool Summary:\n",
            " 1. Weather Information: Get current weather information for a specific location. Inp...\n",
            " 2. Google Search tool: useful for when you need to ask with search...\n",
            " 3. Search: A search engine. Useful for when you need to answer question...\n"
          ]
        }
      ],
      "source": [
        "def get_google_search_tools():\n",
        "    \"\"\"Initialize Google search tools for web searches.\"\"\"\n",
        "    try:\n",
        "        # Initialize the search wrapper\n",
        "        search = GoogleSerperAPIWrapper()\n",
        "        \n",
        "        # Create custom search tool\n",
        "        google_search_tool = Tool(\n",
        "            name=\"Google Search tool\",\n",
        "            func=search.run,\n",
        "            description=\"Useful for answering general travel questions, current events, and price lookups via web search.\"\n",
        "        )\n",
        "        \n",
        "        # Load additional SerpAPI tools\n",
        "        tools = [google_search_tool] + load_tools([\"serpapi\"])\n",
        "        \n",
        "        print(f\"✅ Search tools created: {len(tools)} tools\")\n",
        "        return tools\n",
        "        \n",
        "    except Exception as e:\n",
        "        print(f\"❌ Error loading search tools: {e}\")\n",
        "        return []\n",
        "\n",
        "def initialize_amadeus_tools(llm):\n",
        "    \"\"\"Initialize Amadeus toolkit for flight searches.\"\"\"\n",
        "    try:\n",
        "        # Get credentials from environment\n",
        "        client_id = os.getenv(\"AMADEUS_CLIENT_ID\")\n",
        "        client_secret = os.getenv(\"AMADEUS_CLIENT_SECRET\")\n",
        "        \n",
        "        if not client_id or not client_secret:\n",
        "            print(\"⚠️ Amadeus credentials not found. Flight tools will be limited.\")\n",
        "            return []\n",
        "        \n",
        "        # Initialize Amadeus client\n",
        "        amadeus = Client(client_id=client_id, client_secret=client_secret)\n",
        "        \n",
        "        # Resolve the pydantic forward reference to `Client` before building\n",
        "        # the toolkit (avoids pydantic's 'class-not-fully-defined' error)\n",
        "        AmadeusToolkit.model_rebuild()\n",
        "        \n",
        "        # Create Amadeus toolkit\n",
        "        amadeus_toolkit = AmadeusToolkit(client=amadeus, llm=llm)\n",
        "        amadeus_tools = amadeus_toolkit.get_tools()\n",
        "        \n",
        "        print(f\"✅ Amadeus tools created: {len(amadeus_tools)} tools\")\n",
        "        return amadeus_tools\n",
        "        \n",
        "    except Exception as e:\n",
        "        print(f\"❌ Error initializing Amadeus tools: {e}\")\n",
        "        return []\n",
        "\n",
        "# Create all tools\n",
        "search_tools = get_google_search_tools()\n",
        "amadeus_tools = initialize_amadeus_tools(llm)\n",
        "\n",
        "# Combine all tools\n",
        "all_tools = weather_tools + search_tools + amadeus_tools\n",
        "\n",
        "print(f\"\\n🛠️ Total tools available: {len(all_tools)}\")\n",
        "print(\"Tool Summary:\")\n",
        "for i, tool in enumerate(all_tools, 1):\n",
        "    print(f\"{i:2d}. {tool.name}: {tool.description[:60]}...\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## 4. Custom Agent Implementation\n",
        "\n",
        "Our custom agent implementation sidesteps the output-parsing failures that built-in agent executors often hit with small local models, by managing tool selection and execution directly. This approach:\n",
        "\n",
        "1. Uses the LLM to decide which tool to use\n",
        "2. Executes the selected tool\n",
        "3. Uses the LLM to generate a final answer based on the tool result\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "✅ Custom agent implementation ready\n"
          ]
        }
      ],
      "source": [
        "def simple_agent_execute(llm, tools, question):\n",
        "    \"\"\"\n",
        "    Custom agent implementation that directly manages tool selection and execution.\n",
        "    \n",
        "    Args:\n",
        "        llm: LangChain LLM instance\n",
        "        tools: List of available tools\n",
        "        question: User's question\n",
        "        \n",
        "    Returns:\n",
        "        dict: Execution result with tool usage and final answer\n",
        "    \"\"\"\n",
        "    try:\n",
        "        # Create tool mapping for quick lookup\n",
        "        tool_map = {tool.name: tool for tool in tools}\n",
        "        tool_names = \", \".join([tool.name for tool in tools])\n",
        "        \n",
        "        # Step 1: LLM decides which tool to use\n",
        "        decision_prompt = f\"\"\"\n",
        "You are a travel agent assistant with access to these tools:\n",
        "{tool_names}\n",
        "\n",
        "For the question: \"{question}\"\n",
        "\n",
        "Decide which tool to use and provide the input. Respond exactly as:\n",
        "TOOL: [tool_name]\n",
        "INPUT: [tool_input]\n",
        "\n",
        "If no tool is needed:\n",
        "TOOL: none\n",
        "INPUT: none\n",
        "\n",
        "Available tools:\n",
        "- \"Weather Information\": for weather queries (input: city name)\n",
        "- \"Google Search tool\": for general info and flight prices (input: search query)\n",
        "- \"closest_airport\": for finding airports (input: city/location)\n",
        "- \"single_flight_search\": for flight searches using Amadeus API\n",
        "\n",
        "Question: {question}\n",
        "\"\"\"\n",
        "        \n",
        "        # Get LLM decision\n",
        "        response = llm.invoke(decision_prompt)\n",
        "        decision = response.content if hasattr(response, 'content') else str(response)\n",
        "        \n",
        "        # Step 2: Parse the decision\n",
        "        lines = decision.strip().split('\\n')\n",
        "        tool_name = None\n",
        "        tool_input = None\n",
        "        \n",
        "        for line in lines:\n",
        "            if line.startswith('TOOL:'):\n",
        "                tool_name = line.replace('TOOL:', '').strip()\n",
        "            elif line.startswith('INPUT:'):\n",
        "                tool_input = line.replace('INPUT:', '').strip()\n",
        "        \n",
        "        print(f\"🎯 Agent Decision: Tool='{tool_name}', Input='{tool_input}'\")\n",
        "        \n",
        "        # Step 3: Execute the tool if needed\n",
        "        if tool_name and tool_name != 'none' and tool_name in tool_map:\n",
        "            tool = tool_map[tool_name]\n",
        "            \n",
        "            try:\n",
        "                # Handle different tool types (compatibility layer)\n",
        "                if hasattr(tool, 'func'):\n",
        "                    tool_result = tool.func(tool_input)\n",
        "                elif hasattr(tool, '_run'):\n",
        "                    tool_result = tool._run(tool_input)\n",
        "                elif hasattr(tool, 'run'):\n",
        "                    tool_result = tool.run(tool_input)\n",
        "                else:\n",
        "                    tool_result = tool(tool_input)\n",
        "                    \n",
        "                print(f\"🔧 Tool executed successfully\")\n",
        "                \n",
        "            except Exception as tool_error:\n",
        "                tool_result = f\"Error executing tool {tool_name}: {str(tool_error)}\"\n",
        "                print(f\"❌ Tool execution error: {tool_error}\")\n",
        "            \n",
        "            # Step 4: Generate final answer based on tool result\n",
        "            final_prompt = f\"\"\"\n",
        "Based on the tool result below, provide a helpful and detailed answer to the user's question.\n",
        "\n",
        "User Question: {question}\n",
        "Tool Used: {tool_name}\n",
        "Tool Result: {tool_result}\n",
        "\n",
        "Provide a comprehensive answer:\n",
        "\"\"\"\n",
        "            \n",
        "            final_response = llm.invoke(final_prompt)\n",
        "            final_answer = final_response.content if hasattr(final_response, 'content') else str(final_response)\n",
        "            \n",
        "            return {\n",
        "                'tool_used': tool_name,\n",
        "                'tool_input': tool_input,\n",
        "                'tool_result': tool_result,\n",
        "                'final_answer': final_answer\n",
        "            }\n",
        "        else:\n",
        "            # No tool needed, direct answer\n",
        "            direct_prompt = f\"Answer the following travel-related question directly:\\n\\n{question}\"\n",
        "            direct_response = llm.invoke(direct_prompt)\n",
        "            direct_answer = direct_response.content if hasattr(direct_response, 'content') else str(direct_response)\n",
        "            \n",
        "            return {\n",
        "                'tool_used': 'none',\n",
        "                'tool_input': 'none',\n",
        "                'tool_result': 'none',\n",
        "                'final_answer': direct_answer\n",
        "            }\n",
        "            \n",
        "    except Exception as e:\n",
        "        return {\n",
        "            'tool_used': 'error',\n",
        "            'tool_input': 'error',\n",
        "            'tool_result': str(e),\n",
        "            'final_answer': f\"Sorry, I encountered an error: {str(e)}\"\n",
        "        }\n",
        "\n",
        "print(\"✅ Custom agent implementation ready\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## 5. Agent Testing\n",
        "\n",
        "Let's test our agent with different types of travel queries to see how it selects and uses tools.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 25,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "============================================================\n",
            "🤔 Query: What's the weather like in Tokyo today?\n",
            "============================================================\n",
            "🎯 Agent Decision: Tool='Weather Information', Input='Tokyo'\n",
            "🔧 Tool executed successfully\n",
            "\n",
            "🔧 Tool Usage:\n",
            "   Tool: Weather Information\n",
            "   Input: Tokyo\n",
            "   Result: Current Weather in Tokyo, JP:\n",
            "- Temperature: 27.21°C\n",
            "- Conditions: Heavy Intensity Rain\n",
            "- Humidity: 73%\n",
            "- Wind Speed: 6.13 m/s...\n",
            "\n",
            "🎯 Final Answer:\n",
            "   Based on the weather information provided by the tool, here's a detailed answer to the user's question:\n",
            "\n",
            "**Current Weather in Tokyo, Japan**\n",
            "\n",
            "As of the current time, Tokyo is experiencing heavy intensity rain, with a temperature of 27.21°C. This is a relatively warm temperature for Tokyo, especially...\n"
          ]
        }
      ],
      "source": [
        "def test_agent(query):\n",
        "    \"\"\"Test the agent with a specific query and display results.\"\"\"\n",
        "    print(f\"\\n{'='*60}\")\n",
        "    print(f\"🤔 Query: {query}\")\n",
        "    print(f\"{'='*60}\")\n",
        "    \n",
        "    result = simple_agent_execute(llm, all_tools, query)\n",
        "    \n",
        "    if result['tool_used'] not in ('none', 'error'):\n",
        "        print(f\"\\n🔧 Tool Usage:\")\n",
        "        print(f\"   Tool: {result['tool_used']}\")\n",
        "        print(f\"   Input: {result['tool_input']}\")\n",
        "        print(f\"   Result: {result['tool_result'][:200]}...\")  # Truncate for display\n",
        "    \n",
        "    print(f\"\\n🎯 Final Answer:\")\n",
        "    print(f\"   {result['final_answer'][:300]}...\")  # Truncate for display\n",
        "    \n",
        "    return result\n",
        "\n",
        "# Test with different query types\n",
        "test_queries = [\n",
        "    \"What's the weather like in Tokyo today?\",\n",
        "    \"What are the best places to visit in Paris?\",\n",
        "    \"What's the best time to visit Bali?\"\n",
        "]\n",
        "\n",
        "# Test weather query\n",
        "weather_result = test_agent(test_queries[0])\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "vscode": {
          "languageId": "raw"
        }
      },
      "source": [
        "## 6. Key Concepts Summary\n",
        "\n",
        "### LangChain Agent Components:\n",
        "\n",
        "1. **LLM (Large Language Model)**: The brain that makes decisions and generates responses\n",
        "2. **Tools**: External APIs and services (weather, search, flights)\n",
        "3. **Agent**: Orchestrates between LLM and tools\n",
        "4. **Executor**: Manages the execution flow and error handling\n",
        "\n",
        "### Agent Workflow Patterns:\n",
        "\n",
        "1. **ReAct Pattern**: Thought → Action → Observation → Repeat\n",
        "2. **Custom Pattern**: Decision → Tool Execution → Final Answer\n",
        "\n",
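        "The ReAct loop can be sketched in a few lines of plain Python. This is an illustrative sketch only, with assumed simplified interfaces (`llm_fn` is any callable returning text, `tools` is a dict of callables - not the notebook's LangChain objects):\n",
        "\n",
        "```python\n",
        "def react_loop(llm_fn, tools, question, max_steps=3):\n",
        "    # Thought -> Action -> Observation, repeated until a final answer\n",
        "    transcript = f'Question: {question}\\n'\n",
        "    for _ in range(max_steps):\n",
        "        reply = llm_fn(transcript)  # LLM proposes an action or final answer\n",
        "        transcript += reply + '\\n'\n",
        "        if reply.startswith('Final Answer:'):\n",
        "            return reply[len('Final Answer:'):].strip()\n",
        "        if reply.startswith('Action:'):\n",
        "            name, _, arg = reply[len('Action:'):].strip().partition(' ')\n",
        "            observation = tools.get(name, lambda _: 'unknown tool')(arg)\n",
        "            transcript += f'Observation: {observation}\\n'  # fed back to LLM\n",
        "    return 'No answer within step budget.'\n",
        "```\n",
        "\n",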
        "### Best Practices:\n",
        "\n",
        "- ✅ Use environment variables for API keys\n",
        "- ✅ Implement proper error handling\n",
        "- ✅ Create compatibility layers for different tool types\n",
        "- ✅ Provide clear tool descriptions\n",
        "- ✅ Test with various query types\n",
        "\n",
        "### Tool Design Principles:\n",
        "\n",
        "- **Single Responsibility**: Each tool has a specific purpose\n",
        "- **Clear Interface**: Well-defined inputs and outputs\n",
        "- **Error Handling**: Graceful failure with helpful messages\n",
        "- **Documentation**: Clear descriptions for the LLM to understand\n",
        "\n",
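        "As a concrete (hypothetical) illustration of these principles, here is a `currency_convert` tool function - not part of this notebook's toolset, and the rates are made up. It has a single responsibility, a clear string-in/string-out interface, and fails gracefully with a helpful message:\n",
        "\n",
        "```python\n",
        "def currency_convert(query: str) -> str:\n",
        "    # Single responsibility: convert 'AMOUNT FROM TO' using a fixed table\n",
        "    rates = {('USD', 'JPY'): 155.0, ('USD', 'EUR'): 0.92}  # illustrative rates\n",
        "    try:\n",
        "        amount, src, dst = query.split()\n",
        "        value = float(amount) * rates[(src.upper(), dst.upper())]\n",
        "        return f'{value:.2f} {dst.upper()}'\n",
        "    except (ValueError, KeyError):\n",
        "        return 'Error: expected input like \"100 USD JPY\" with a supported pair.'\n",
        "```\n",
        "\n",
        "Wrapped with `Tool(name=..., func=currency_convert, description=...)`, it would plug into the same agent loop as the weather tool.\n",
        "\n",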
        "## 7. Next Steps\n",
        "\n",
        "To extend this agent, you could:\n",
        "\n",
        "1. **Add More Tools**: Hotel booking, restaurant recommendations, etc.\n",
        "2. **Improve Prompts**: Better tool selection and response formatting\n",
        "3. **Memory System**: Remember conversation context\n",
        "4. **Multi-step Planning**: Break complex queries into sub-tasks\n",
        "5. **Streaming Responses**: Real-time response generation\n",
        "\n",
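        "A minimal memory system (item 3 above) could be sketched as follows - a simplified illustration, not LangChain's actual memory API:\n",
        "\n",
        "```python\n",
        "from collections import deque\n",
        "\n",
        "class ChatMemory:\n",
        "    def __init__(self, max_turns=5):\n",
        "        self.turns = deque(maxlen=max_turns)  # oldest turns drop automatically\n",
        "\n",
        "    def add(self, user, assistant):\n",
        "        self.turns.append((user, assistant))\n",
        "\n",
        "    def as_prompt_prefix(self):\n",
        "        # Prepend this to the next decision prompt to give the LLM context\n",
        "        return '\\n'.join(f'User: {u}\\nAssistant: {a}' for u, a in self.turns)\n",
        "```\n",
        "\n",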
        "### Deployment Options:\n",
        "\n",
        "- **Streamlit**: Web interface (as shown in `AI_Travel_Agent_streamlit.py`)\n",
        "- **FastAPI**: REST API service\n",
        "- **Gradio**: Quick prototyping interface\n",
        "- **Discord/Slack Bot**: Chat platform integration\n",
        "\n",
        "Happy building! 🚀\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "ryzenai-llm",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.16"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
