import streamlit as st

st.markdown("""
**Weather agent**

Example of PydanticAI with `multiple tools` that the LLM needs to call in turn to answer a question.
""")

with st.expander("🎯 Objectives"):
    st.markdown("""
- Use an **OpenAI GPT-4o-mini** agent to `process natural language queries` about the weather.
- Fetch **geolocation** from a location string using the `Maps.co API`.
- Retrieve **real-time weather** using the Tomorrow.io API.
- Handle `retries`, `backoff`, and `logging` using **Logfire**.
- Integrate all parts in a clean, async-compatible **Streamlit UI**.
- Ensure `concise` and `structured` responses.
""")

with st.expander("🧰 Pre-requisites"):
    st.markdown("""
- Python 3.10+
- Streamlit
- AsyncClient (httpx)
- OpenAI `pydantic_ai` Agent
- Logfire for tracing/debugging
- Valid API keys:
  - [https://geocode.maps.co/](https://geocode.maps.co/)
  - [https://www.tomorrow.io/](https://www.tomorrow.io/)
""")
    st.code("""
pip install streamlit httpx logfire pydantic_ai
""")

with st.expander("⚙️ Step-by-Step Setup"):
    st.markdown("**Imports and Global Client**")
    st.code("""
import os
import asyncio
import streamlit as st
from dataclasses import dataclass
from typing import Any

import logfire
from httpx import AsyncClient
from pydantic_ai import Agent, RunContext, ModelRetry

logfire.configure(send_to_logfire='if-token-present')
client = AsyncClient()
""")

| st.markdown("**Declare Dependencies**") | |
| st.code(""" | |
| @dataclass | |
| class Deps: | |
| client: AsyncClient # client is an instance of AsyncClient (from httpx). | |
| weather_api_key: str | None | |
| geo_api_key: str | None | |
| """) | |
| st.markdown("**Setup Weather Agent**") | |
| st.code(""" | |
| weather_agent = Agent( | |
| 'openai:gpt-4o-mini', | |
| system_prompt=( | |
| 'Be concise, reply with one sentence. ' | |
| 'Use the `get_lat_lng` tool to get the latitude and longitude of the locations, ' | |
| 'then use the `get_weather` tool to get the weather.' | |
| ), | |
| deps_type= Deps, | |
| retries = 2, | |
| ) | |
| """) | |
| st.markdown("**Define Geocoding Tool with Retry**") | |
| st.code(""" | |
| @weather_agent.tool | |
| async def get_lat_lng(ctx: RunContext[Deps], | |
| location_description: str, | |
| max_retries: int = 5, | |
| base_delay: int = 2) -> dict[str, float]: | |
| "Get the latitude and longitude of a location with retry handling for rate limits." | |
| if ctx.deps.geo_api_key is None: | |
| return {'lat': 51.1, 'lng': -0.1} # Default to London | |
| # Sets up API request parameters. | |
| params = {'q': location_description, 'api_key': ctx.deps.geo_api_key} | |
| # Loops for a maximum number of retries. | |
| for attempt in range(max_retries): | |
| try: | |
| # Logs API call span with parameters. | |
| with logfire.span('calling geocode API', params=params) as span: | |
| # Sends async GET request. | |
| r = await ctx.deps.client.get('https://geocode.maps.co/search', params=params) | |
| # Checks if API rate limit is exceeded. | |
| if r.status_code == 429: | |
| # Exponential backoff | |
| wait_time = base_delay * (2 ** attempt) | |
| # Waits before retrying. | |
| await asyncio.sleep(wait_time) | |
| # Continues to the next retry attempt. | |
| continue | |
| r.raise_for_status() | |
| data = r.json() | |
| span.set_attribute('response', data) | |
| if data: | |
| # Extracts and returns latitude & longitude. | |
| return {'lat': float(data[0]['lat']), 'lng': float(data[0]['lon'])} | |
| else: | |
| # Raises an error if no valid data is found. | |
| raise ModelRetry('Could not find the location') | |
| except Exception as e: # Catches HTTP errors. | |
| print(f"Request failed: {e}") # Logs the failure. | |
| raise ModelRetry('Failed after multiple retries') | |
| """) | |
| st.markdown("**Define Weather Tool**") | |
| st.code(""" | |
| @weather_agent.tool | |
| async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str, Any]: | |
| if ctx.deps.weather_api_key is None: | |
| return {'temperature': '21 °C', 'description': 'Sunny'} | |
| params = {'apikey': ctx.deps.weather_api_key, 'location': f'{lat},{lng}', 'units': 'metric'} | |
| r = await ctx.deps.client.get('https://api.tomorrow.io/v4/weather/realtime', params=params) | |
| r.raise_for_status() | |
| data = r.json() | |
| values = data['data']['values'] | |
| code_lookup = { | |
| 1000: 'Clear, Sunny', 1001: 'Cloudy', 1100: 'Mostly Clear', 1101: 'Partly Cloudy', | |
| 1102: 'Mostly Cloudy', 2000: 'Fog', 2100: 'Light Fog', 4000: 'Drizzle', 4001: 'Rain', | |
| 4200: 'Light Rain', 4201: 'Heavy Rain', 5000: 'Snow', 5001: 'Flurries', | |
| 5100: 'Light Snow', 5101: 'Heavy Snow', 6000: 'Freezing Drizzle', 6001: 'Freezing Rain', | |
| 6200: 'Light Freezing Rain', 6201: 'Heavy Freezing Rain', 7000: 'Ice Pellets', | |
| 7101: 'Heavy Ice Pellets', 7102: 'Light Ice Pellets', 8000: 'Thunderstorm', | |
| } | |
| return { | |
| 'temperature': f'{values["temperatureApparent"]:0.0f}°C', | |
| 'description': code_lookup.get(values['weatherCode'], 'Unknown'), | |
| } | |
| """) | |
| st.markdown("**Wrapper to Run the Agent**") | |
| st.code(""" | |
| async def run_weather_agent(user_input: str): | |
| deps = Deps( | |
| client=client, | |
| weather_api_key = os.getenv("TOMORROW_IO_API_KEY"), | |
| geo_api_key = os.getenv("GEOCODE_API_KEY") | |
| ) | |
| result = await weather_agent.run(user_input, deps=deps) | |
| return result.data | |
| """) | |
| st.markdown("**Streamlit UI with Async Handling**") | |
| st.code(""" | |
| st.set_page_config(page_title="Weather Application", page_icon="🚀") | |
| if "weather_response" not in st.session_state: | |
| st.session_state.weather_response = None | |
| st.title("Weather Agent App") | |
| user_input = st.text_area("Enter a sentence with locations:", "What is the weather like in Bangalore, Chennai and Delhi?") | |
| if st.button("Get Weather"): | |
| with st.spinner("Fetching weather..."): | |
| loop = asyncio.new_event_loop() | |
| asyncio.set_event_loop(loop) | |
| response = loop.run_until_complete(run_weather_agent(user_input)) | |
| st.session_state.weather_response = response | |
| if st.session_state.weather_response: | |
| st.info(st.session_state.weather_response) | |
| """) | |
| with st.expander("Description of Each Step"): | |
| st.markdown(""" | |
| - **Imports**: Brings in all required packages including `httpx`, `logfire`, and `streamlit`. | |
| - **`Deps` Dataclass**: Encapsulates dependencies injected into the agent like the API keys and shared HTTP client. | |
| - **Weather Agent**: Configures an OpenAI GPT-4o-mini agent with tools for geolocation and weather. | |
| - **Tools**: | |
| - `get_lat_lng`: Geocodes a location using a free Maps.co API. Implements retry with exponential backoff. | |
| - `get_weather`: Fetches live weather info from Tomorrow.io using lat/lng. | |
| - **Agent Runner**: Wraps the interaction to run asynchronously with injected dependencies. | |
| - **Streamlit UI**: Captures user input, triggers agent execution, and displays response with `asyncio`. | |
| """) | |
st.image("https://raw.githubusercontent.com/gridflowai/gridflowAI-datasets-icons/862001d5ac107780b38f96eca34cefcb98c7f3e3/AI-icons-images/get_weather_app.png",
         caption="Agentic Weather App Flow",
         use_column_width=True)

import os
import asyncio
import streamlit as st
from dataclasses import dataclass
from typing import Any

import logfire
from httpx import AsyncClient
from pydantic_ai import Agent, RunContext, ModelRetry

# Configure Logfire (no-op unless a token is present).
logfire.configure(send_to_logfire='if-token-present')

# Dependencies injected into every tool call.
@dataclass
class Deps:
    client: AsyncClient
    weather_api_key: str | None
    geo_api_key: str | None

weather_agent = Agent(
    'openai:gpt-4o-mini',
    system_prompt=(
        'Be concise, reply with one sentence. '
        'Use the `get_lat_lng` tool to get the latitude and longitude of the locations, '
        'then use the `get_weather` tool to get the weather.'
    ),
    deps_type=Deps,
    retries=2,
)

# Create a single global AsyncClient instance.
client = AsyncClient()

@weather_agent.tool
async def get_lat_lng(ctx: RunContext[Deps],
                      location_description: str,
                      max_retries: int = 5,
                      base_delay: int = 2) -> dict[str, float]:
    """Get the latitude and longitude of a location, retrying on rate limits."""
    if ctx.deps.geo_api_key is None:
        return {'lat': 51.1, 'lng': -0.1}  # Default to London.

    # Set up the API request parameters.
    params = {'q': location_description, 'api_key': ctx.deps.geo_api_key}

    # Loop for a maximum number of retries.
    for attempt in range(max_retries):
        try:
            # Log the API call as a span, with its parameters.
            with logfire.span('calling geocode API', params=params) as span:
                # Send the async GET request.
                r = await ctx.deps.client.get('https://geocode.maps.co/search', params=params)

                # Back off and retry if the API rate limit is exceeded.
                if r.status_code == 429:  # Too Many Requests
                    wait_time = base_delay * (2 ** attempt)  # Exponential backoff.
                    print(f"Rate limited. Retrying in {wait_time} seconds...")
                    await asyncio.sleep(wait_time)  # Wait before retrying.
                    continue  # Retry the request.

                # Raise an exception for other HTTP errors.
                r.raise_for_status()

                # Parse the API response as JSON and record it on the span.
                data = r.json()
                span.set_attribute('response', data)

                if data:
                    # Extract and return latitude & longitude.
                    return {'lat': float(data[0]['lat']), 'lng': float(data[0]['lon'])}
                else:
                    # Raise an error if no valid data is found.
                    raise ModelRetry('Could not find the location')
        except Exception as e:  # Catch HTTP and network errors.
            print(f"Request failed: {e}")  # Log the failure.
            raise ModelRetry('Failed after multiple retries')

    # Every attempt was rate-limited.
    raise ModelRetry('Failed after multiple retries')

@weather_agent.tool
async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str, Any]:
    """Get the weather at a location."""
    if ctx.deps.weather_api_key is None:
        return {'temperature': '21 °C', 'description': 'Sunny'}  # Dummy data without a key.

    params = {'apikey': ctx.deps.weather_api_key, 'location': f'{lat},{lng}', 'units': 'metric'}
    r = await ctx.deps.client.get('https://api.tomorrow.io/v4/weather/realtime', params=params)
    r.raise_for_status()

    data = r.json()
    values = data['data']['values']

    # Map Tomorrow.io weather codes to human-readable descriptions.
    code_lookup = {
        1000: 'Clear, Sunny', 1001: 'Cloudy', 1100: 'Mostly Clear', 1101: 'Partly Cloudy',
        1102: 'Mostly Cloudy', 2000: 'Fog', 2100: 'Light Fog', 4000: 'Drizzle', 4001: 'Rain',
        4200: 'Light Rain', 4201: 'Heavy Rain', 5000: 'Snow', 5001: 'Flurries',
        5100: 'Light Snow', 5101: 'Heavy Snow', 6000: 'Freezing Drizzle', 6001: 'Freezing Rain',
        6200: 'Light Freezing Rain', 6201: 'Heavy Freezing Rain', 7000: 'Ice Pellets',
        7101: 'Heavy Ice Pellets', 7102: 'Light Ice Pellets', 8000: 'Thunderstorm',
    }
    return {
        'temperature': f'{values["temperatureApparent"]:0.0f}°C',
        'description': code_lookup.get(values['weatherCode'], 'Unknown'),
    }

async def run_weather_agent(user_input: str):
    deps = Deps(
        client=client,  # Use the global client.
        weather_api_key=os.getenv("TOMORROW_IO_API_KEY"),
        geo_api_key=os.getenv("GEOCODE_API_KEY"),
    )
    result = await weather_agent.run(user_input, deps=deps)
    return result.data

# Initialize session state for storing weather responses.
if "weather_response" not in st.session_state:
    st.session_state.weather_response = None

# Set the page title.
# st.set_page_config(page_title="Weather Application", page_icon="🚀")

# Streamlit UI
with st.expander("**Example prompts**"):
    st.markdown("""
**Prompt:** If I were in Sydney today, would I need a jacket?

**Bot:** No, you likely wouldn't need a jacket as it's clear and sunny with a temperature of 22°C in Sydney.

**Prompt:** Tell me whether it's beach weather in Bali and Phuket.

**Bot:** Bali is too cold at 7°C and partly cloudy for beach weather, while Phuket is warm at 26°C with drizzle, making it more suitable for beach activities.

**Prompt:** If I had a meeting in Dubai, should I wear light clothing?

**Bot:** Yes, you should wear light clothing as the temperature in Dubai is currently 25°C and mostly clear.

**Prompt:** How does today's temperature in Tokyo compare to the same time last week?

**Bot:** Today's temperature in Tokyo is 14°C, which is the same as the temperature at the same time last week.

**Prompt:** Is the current weather suitable for air travel in London and New York?

**Bot:** The current weather in London is 5°C and cloudy, and in New York, it is -0°C and clear; both conditions are generally suitable for air travel.
""")

user_input = st.text_area("Enter a sentence with locations:", "What is the weather like in Bangalore, Chennai and Delhi?")

# Button to trigger the weather fetch.
if st.button("Get Weather"):
    with st.spinner("Fetching weather..."):
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        response = loop.run_until_complete(run_weather_agent(user_input))
        st.session_state.weather_response = response

# Display the stored response.
if st.session_state.weather_response:
    st.info(st.session_state.weather_response)

with st.expander("🧠 How is this app Agentic?"):
    st.markdown("""
###### ✅ How this App is Agentic

This weather app demonstrates **Agentic AI** because:

1. **Goal-Oriented Autonomy**
   The user provides a natural language request (e.g., *"What's the weather in Bangalore and Delhi?"*), and the agent autonomously figures out *how* to fulfill it.
2. **Tool Usage by the Agent**
   The `Agent` uses two tools:
   - `get_lat_lng()` – to fetch coordinates via a geocoding API.
   - `get_weather()` – to get real-time weather for those coordinates.

   The agent determines when and how to use these tools.
3. **Context + Dependency Injection**
   The app uses the `Deps` dataclass to provide the agent with shared dependencies like the HTTP client and API keys, just like a human agent accessing internal tools.
4. **Retries and Adaptive Behavior**
   The agent handles failures and retries via `ModelRetry`, showing resilience and smart retry logic (see the sketch below).
5. **Structured Interactions via `RunContext`**
   Each tool runs with access to structured context, enabling better coordination and reuse of shared state.
6. **LLM-Orchestrated Actions**
   At the core, a GPT-4o-mini model orchestrates: understanding the user intent, selecting and invoking the right tools, and synthesizing the final response.

> 🧠 **In essence**: This is not just a chatbot, but an *autonomous reasoning engine* that uses real tools to complete real-world goals.
""")
with st.expander("🧪 Example Prompts: Handling Complex Queries"):
    st.markdown("""
This app can understand **natural, varied, and multi-part prompts** thanks to the LLM-based agent at its core.
It intelligently uses the `get_lat_lng()` and `get_weather()` tools based on user intent.

###### 🗣️ Complex Prompt Examples & Responses:

**Prompt:**
*If I were in Sydney today, would I need a jacket?*

**Response:**
*No, you likely wouldn't need a jacket as it's clear and sunny with a temperature of 22°C in Sydney.*

---

**Prompt:**
*Tell me whether it's beach weather in Bali and Phuket.*

**Response:**
*Bali is too cold at 7°C and partly cloudy for beach weather, while Phuket is warm at 26°C with drizzle, making it more suitable for beach activities.*

---

**Prompt:**
*If I had a meeting in Dubai, should I wear light clothing?*

**Response:**
*Yes, you should wear light clothing as the temperature in Dubai is currently 25°C and mostly clear.*

---

**Prompt:**
*How does today's temperature in Tokyo compare to the same time last week?*

**Response:**
*Today's temperature in Tokyo is 14°C, which is the same as the temperature at the same time last week.*
*(Note: this would require historical API support to be accurate in a real app.)*

---

**Prompt:**
*Is the current weather suitable for air travel in London and New York?*

**Response:**
*The current weather in London is 5°C and cloudy, and in New York, it is -0°C and clear; both conditions are generally suitable for air travel.*

---

**Prompt:**
*Give me the weather update for all cities where cricket matches are happening today in India.*

**Response:**
*(This would involve external logic for identifying cricket venues, but the agent can handle the weather lookup part once the cities are known.)*

---

###### 🧠 Why it Works:

- The **agent extracts all cities** from the prompt, even when they are mixed with unrelated text.
- It **chains tool calls**: first geolocation, then weather.
- The **final response is LLM-crafted** to match the tone and question format (yes/no, suggestion, comparison, etc.).

> ✅ You don't need to ask "what's the weather in X" exactly — the agent infers it from how humans speak.
""")

with st.expander("🔍 Missing Agentic AI Capabilities & How to Improve"):
    st.markdown("""
While the app exhibits several **agentic behaviors**—like tool use, intent recognition, and multi-step reasoning—it still lacks **some core features** found in *fully agentic systems*. Here's what's missing:

###### ❌ Missing Facets & How to Add Them

**1. Autonomy & Proactive Behavior**
*Current:* The app only responds to user prompts.
*To Add:* Let the agent proactively ask follow-ups.
**Example:**
- User: *What's the weather in Italy?*
- Agent: *Italy has multiple cities. Would you like the weather in Rome, Milan, or Venice?*

**2. Goal-Oriented Planning**
*Current:* Executes one tool or a fixed chain of tools.
*To Add:* Give it a higher-level goal and let it plan the steps.
**Example:**
- Prompt: *Help me plan a weekend trip to a warm place in Europe.*
- Agent: Finds warm cities, checks the weather, compares, and recommends.

**3. Memory / Session Context**
*Current:* Stateless; each query is standalone.
*To Add:* Use LangGraph or crewAI memory modules to **remember past queries** or preferences (see the sketch at the end of this section).
**Example:**
- User: *What's the weather in Delhi?*
- Then: *And how about tomorrow?* → The agent should know the context refers to Delhi.

**4. Delegation to Sub-Agents**
*Current:* Single-agent, monolithic logic.
*To Add:* Delegate tasks to specialized agents (geocoder agent, weather formatter agent, response stylist, etc.).
**Example:**
- Planner agent decides cities → Fetcher agent retrieves data → Explainer agent summarizes.

**5. Multi-Modal Input/Output**
*Current:* Text only.
*To Add:* Accept voice prompts or generate a weather infographic.
**Example:**
- Prompt: *A voice note saying "Is it rainy in London?"* → Returns an image with rainy clouds and a summary.

**6. Learning from Feedback**
*Current:* No learning or improvement from user input.
*To Add:* Allow thumbs up/down or other feedback to tune responses.
**Example:**
- User: *That was not helpful.* → Agent: *Sorry! Want a more detailed report or a city breakdown?*

---

###### ✅ Summary
This app **lays a strong foundation for Agentic AI**, but adding these elements would bring it closer to a **truly autonomous, context-aware, and planning-capable agent** that mimics human-level task execution.
""")