Humanlearning committed
Commit f844f16 · Parent: 866a286

updated agent

Files changed (50)
  1. README.md +111 -3
  2. README_MULTI_AGENT_SYSTEM.md +244 -0
  3. __pycache__/langgraph_agent_system.cpython-310.pyc +0 -0
  4. __pycache__/langgraph_agent_system.cpython-313.pyc +0 -0
  5. __pycache__/memory_system.cpython-313.pyc +0 -0
  6. __pycache__/observability.cpython-310.pyc +0 -0
  7. __pycache__/observability.cpython-313.pyc +0 -0
  8. __pycache__/tools.cpython-313.pyc +0 -0
  9. agents/__init__.py +21 -0
  10. agents/__pycache__/__init__.cpython-313.pyc +0 -0
  11. agents/__pycache__/answer_formatter.cpython-310.pyc +0 -0
  12. agents/__pycache__/answer_formatter.cpython-313.pyc +0 -0
  13. agents/__pycache__/code_agent.cpython-310.pyc +0 -0
  14. agents/__pycache__/code_agent.cpython-313.pyc +0 -0
  15. agents/__pycache__/lead_agent.cpython-310.pyc +0 -0
  16. agents/__pycache__/lead_agent.cpython-313.pyc +0 -0
  17. agents/__pycache__/research_agent.cpython-310.pyc +0 -0
  18. agents/__pycache__/research_agent.cpython-313.pyc +0 -0
  19. agents/answer_formatter.py +243 -0
  20. agents/code_agent.py +440 -0
  21. agents/lead_agent.py +243 -0
  22. agents/research_agent.py +295 -0
  23. archive/.cursor/rules/archive.mdc +6 -0
  24. ARCHITECTURE.md → archive/ARCHITECTURE.md +0 -0
  25. archive/README.md +1 -0
  26. {prompts → archive/prompts}/critic_prompt.txt +0 -0
  27. {prompts → archive/prompts}/execution_prompt.txt +0 -0
  28. {prompts → archive/prompts}/retrieval_prompt.txt +0 -0
  29. {prompts → archive/prompts}/router_prompt.txt +0 -0
  30. {prompts → archive/prompts}/system_prompt.txt +0 -0
  31. {prompts → archive/prompts}/verification_prompt.txt +0 -0
  32. {src → archive/src}/__init__.py +0 -0
  33. {src → archive/src}/__pycache__/__init__.cpython-313.pyc +0 -0
  34. {src → archive/src}/__pycache__/langgraph_system.cpython-313.pyc +0 -0
  35. {src → archive/src}/__pycache__/memory.cpython-313.pyc +0 -0
  36. {src → archive/src}/__pycache__/tracing.cpython-313.pyc +0 -0
  37. {src → archive/src}/agents/__init__.py +0 -0
  38. {src → archive/src}/agents/__pycache__/__init__.cpython-313.pyc +0 -0
  39. {src → archive/src}/agents/__pycache__/critic_agent.cpython-313.pyc +0 -0
  40. {src → archive/src}/agents/__pycache__/execution_agent.cpython-313.pyc +0 -0
  41. {src → archive/src}/agents/__pycache__/plan_node.cpython-313.pyc +0 -0
  42. {src → archive/src}/agents/__pycache__/retrieval_agent.cpython-313.pyc +0 -0
  43. {src → archive/src}/agents/__pycache__/router_node.cpython-313.pyc +0 -0
  44. {src → archive/src}/agents/__pycache__/verification_node.cpython-313.pyc +0 -0
  45. {src → archive/src}/agents/critic_agent.py +0 -0
  46. {src → archive/src}/agents/execution_agent.py +0 -0
  47. {src → archive/src}/agents/plan_node.py +0 -0
  48. {src → archive/src}/agents/retrieval_agent.py +0 -0
  49. {src → archive/src}/agents/router_node.py +0 -0
  50. {src → archive/src}/agents/verification_node.py +0 -0
README.md CHANGED
@@ -12,16 +12,124 @@ hf_oauth: true
 hf_oauth_expiration_minutes: 480
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-To generate a `requirements.txt` file compatible with **Python 3.10** for deployment on a Hugging Face Space, run the following command:
+# LangGraph Multi-Agent System with Langfuse v3 Observability
+
+A sophisticated multi-agent system built with LangGraph that follows best practices for state management, tracing, and iterative workflows. Features comprehensive Langfuse v3 observability with OpenTelemetry integration.
+
+## Architecture Overview
+
+The system implements an iterative research/code loop with specialized agents:
+
+```
+User Query → Lead Agent → Research Agent → Code Agent → Lead Agent (loop) → Answer Formatter → Final Answer
+```
+
+## Key Features
+
+- **🤖 Multi-Agent Workflow**: Specialized agents for research, computation, and formatting
+- **📊 Langfuse v3 Observability**: Complete tracing with OTEL integration and predictable span naming
+- **🔄 Iterative Processing**: Intelligent routing between research and computational tasks
+- **🎯 GAIA Compliance**: Exact-match answer formatting for benchmark evaluation
+- **💾 Memory System**: Vector store integration for learning and caching
+- **🛠️ Tool Integration**: Web search, Wikipedia, ArXiv, calculations, and code execution
+
+## Quick Start
+
+### Environment Setup
+
+Create an `env.local` file with required API keys:
+
+```bash
+# LLM API
+GROQ_API_KEY=your_groq_api_key
+
+# Search Tools
+TAVILY_API_KEY=your_tavily_api_key
+
+# Observability (Langfuse v3)
+LANGFUSE_PUBLIC_KEY=your_langfuse_public_key
+LANGFUSE_SECRET_KEY=your_langfuse_secret_key
+LANGFUSE_HOST=https://cloud.langfuse.com
+
+# Memory (Optional)
+SUPABASE_URL=your_supabase_url
+SUPABASE_SERVICE_KEY=your_supabase_service_key
+```
+
+### Running the System
+
+**Important**: Use `uv run` for proper dependency management:
+
+```bash
+# Run the multi-agent system test
+uv run python test_new_multi_agent_system.py
+
+# Test Langfuse v3 observability
+uv run python test_observability.py
+
+# Run the main application
+uv run python app.py
+```
+
+### Basic Usage
+
+```python
+import asyncio
+from langgraph_agent_system import run_agent_system
+
+async def main():
+    result = await run_agent_system(
+        query="What is the capital of Maharashtra?",
+        user_id="user_123",
+        session_id="session_456"
+    )
+    print(f"Answer: {result}")
+
+asyncio.run(main())
+```
+
+## Observability Dashboard
+
+After running queries, check your traces at: **https://cloud.langfuse.com**
+
+The system provides:
+- 🎯 **Predictable Span Naming**: `agent/<role>`, `tool/<name>`, `llm/<model>`
+- 🔗 **Session Tracking**: User and session continuity across conversations
+- 📈 **Cost & Latency Metrics**: Automatic aggregation by span type
+- 🌐 **OTEL Integration**: Automatic trace correlation across services
+
+## Deployment
+
+### For Hugging Face Spaces
+
+To generate a `requirements.txt` file compatible with **Python 3.10** for deployment:
 
 ```bash
 uv pip compile pyproject.toml --python 3.10 -o requirements.txt
 
+# Remove Windows-specific packages
 # Linux / macOS (bash)
 sed -i '/^pywin32==/d' requirements.txt
 
 # Windows (PowerShell)
 (Get-Content requirements.txt) -notmatch '^pywin32==' | Set-Content requirements.txt
-```
+```
+
+### Environment Variables for Production
+
+Set these in your deployment environment:
+- `GROQ_API_KEY` - Required for LLM inference
+- `TAVILY_API_KEY` - Required for web search
+- `LANGFUSE_PUBLIC_KEY` - Required for observability
+- `LANGFUSE_SECRET_KEY` - Required for observability
+- `LANGFUSE_HOST` - Langfuse endpoint (default: https://cloud.langfuse.com)
+
+## Documentation
+
+For detailed architecture, configuration, and usage instructions, see:
+- **[Multi-Agent System Guide](README_MULTI_AGENT_SYSTEM.md)** - Complete system documentation
+- **[Supabase Setup](README_SUPABASE.md)** - Memory system configuration
+
+## Configuration Reference
+
+Check out the Hugging Face Spaces configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
README_MULTI_AGENT_SYSTEM.md ADDED
@@ -0,0 +1,244 @@
# LangGraph Multi-Agent System

A sophisticated multi-agent system built with LangGraph that follows best practices for state management, tracing, and iterative workflows.

## Architecture Overview

The system implements an iterative research/code loop with specialized agents:

```
User Query → Lead Agent → Research Agent → Code Agent → Lead Agent (loop) → Answer Formatter → Final Answer
```

### Key Components

1. **Lead Agent** (`agents/lead_agent.py`)
   - Orchestrates the entire workflow
   - Makes routing decisions between research and code agents
   - Manages the iterative loop with a maximum of 3 iterations
   - Synthesizes information from specialists into draft answers

2. **Research Agent** (`agents/research_agent.py`)
   - Handles information gathering from multiple sources
   - Uses web search (Tavily), Wikipedia, and ArXiv tools
   - Provides structured research results with citations

3. **Code Agent** (`agents/code_agent.py`)
   - Performs mathematical calculations and code execution
   - Uses calculator tools for basic operations
   - Executes Python code in a sandboxed environment
   - Handles Hugging Face Hub statistics

4. **Answer Formatter** (`agents/answer_formatter.py`)
   - Ensures GAIA benchmark compliance
   - Extracts final answers according to exact-match rules
   - Handles different answer types (numbers, strings, lists)

5. **Memory System** (`memory_system.py`)
   - Vector store integration for long-term learning
   - Session-based caching for performance
   - Similar question retrieval for context

## Core Features

### State Management
- **Immutable State**: Uses LangGraph's Command pattern for pure functions
- **Typed Schema**: AgentState TypedDict ensures type safety
- **Accumulation**: Research notes and code outputs accumulate across iterations

### Observability (Langfuse v3)
- **OTEL-Native Integration**: Uses Langfuse v3 with OpenTelemetry for automatic trace correlation
- **Single Callback Handler**: One global handler passes traces seamlessly through LangGraph
- **Predictable Span Naming**: `agent/<role>`, `tool/<name>`, `llm/<model>` patterns for cost/latency dashboards
- **Session Stitching**: User and session tracking for conversation continuity
- **Background Flushing**: Non-blocking trace export for optimal performance

### Tools Integration
- **Web Search**: Tavily API for current information
- **Knowledge Bases**: Wikipedia and ArXiv for encyclopedic/academic content
- **Computation**: Calculator tools and Python execution
- **Hub Statistics**: Hugging Face model information

## Setup

### Environment Variables
Create an `env.local` file with:

```bash
# LLM API
GROQ_API_KEY=your_groq_api_key

# Search Tools
TAVILY_API_KEY=your_tavily_api_key

# Observability
LANGFUSE_PUBLIC_KEY=your_langfuse_public_key
LANGFUSE_SECRET_KEY=your_langfuse_secret_key
LANGFUSE_HOST=https://cloud.langfuse.com

# Memory (Optional)
SUPABASE_URL=your_supabase_url
SUPABASE_SERVICE_KEY=your_supabase_service_key
```

### Dependencies
The system requires:
- `langgraph>=0.4.8`
- `langchain>=0.3.0`
- `langchain-groq`
- `langfuse>=3.0.0`
- `python-dotenv`
- `tavily-python`

## Usage

### Basic Usage

```python
import asyncio
from langgraph_agent_system import run_agent_system

async def main():
    result = await run_agent_system(
        query="What is the capital of Maharashtra?",
        user_id="user_123",
        session_id="session_456"
    )
    print(f"Answer: {result}")

asyncio.run(main())
```

### Testing

Run the test suite to verify functionality:

```bash
python test_new_multi_agent_system.py
```

Test Langfuse v3 observability integration:

```bash
python test_observability.py
```

### Direct Graph Access

```python
from langchain_core.messages import HumanMessage  # needed for the initial state below
from langgraph_agent_system import create_agent_graph

# Create and compile the workflow
workflow = create_agent_graph()
app = workflow.compile()

# Run with initial state
initial_state = {
    "messages": [HumanMessage(content="Your question")],
    "draft_answer": "",
    "research_notes": "",
    "code_outputs": "",
    "loop_counter": 0,
    "done": False,
    "next": "research",
    "final_answer": "",
    "user_id": "user_123",
    "session_id": "session_456"
}

# (run inside an async context, e.g. an async main() or a notebook)
final_state = await app.ainvoke(initial_state)
print(final_state["final_answer"])
```

## Workflow Details

### Iterative Loop
1. **Lead Agent** analyzes the query and decides on next action
2. If research needed → **Research Agent** gathers information
3. If computation needed → **Code Agent** performs calculations
4. Back to **Lead Agent** for synthesis and next decision
5. When sufficient information → **Answer Formatter** creates final answer

### Routing Logic
The Lead Agent uses the following criteria:
- **Research**: Factual information, current events, citations needed
- **Code**: Mathematical calculations, data analysis, programming tasks
- **Formatter**: Sufficient information gathered OR max iterations reached

### GAIA Compliance
The Answer Formatter ensures exact-match requirements:
- **Numbers**: No commas, units, or extra symbols
- **Strings**: Remove unnecessary articles and formatting
- **Lists**: Comma and space separation
- **No surrounding text**: No "Answer:", quotes, or brackets

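For example, reusing the sample drafts embedded in the formatter prompt in `agents/answer_formatter.py`, the formatter should produce:

```
Draft: "After calculating, the answer is 42."            →  42
Draft: "The capital of France is Paris."                 →  Paris
Draft: "The first three prime numbers are 2, 3, and 5."  →  2, 3, 5
```
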
## Best Practices Implemented

### LangGraph Patterns
- ✅ Pure functions (AgentState → Command)
- ✅ Immutable state with explicit updates
- ✅ Typed state schema with operator annotations
- ✅ Clear routing separated from business logic

### Langfuse v3 Observability
- ✅ OTEL-native SDK with automatic trace correlation
- ✅ Single global callback handler for seamless LangGraph integration
- ✅ Predictable span naming (`agent/<role>`, `tool/<name>`, `llm/<model>`)
- ✅ Session and user tracking with environment tagging
- ✅ Background trace flushing for performance
- ✅ Graceful degradation when observability unavailable

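Every agent opens its span the same way; the following is a sketch based on the usage in this commit (`agent_span` is defined in `observability.py`, which this view does not show):

```python
from observability import agent_span

with agent_span("research", metadata={"user_id": "user_123"}) as span:
    ...  # agent work; surfaces in Langfuse as an "agent/research" span
    if span:  # the agents guard for graceful degradation
        span.update_trace(output={"decision": "research"})
```
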
### Memory Management
- ✅ TTL-based caching for performance
- ✅ Vector store integration for learning
- ✅ Duplicate detection and prevention
- ✅ Session cleanup for long-running instances

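A sketch of the cache lookup the Lead Agent performs (the `MemoryManager` API here is inferred from `agents/lead_agent.py`; a falsy return on a miss is an assumption):

```python
from memory_system import MemoryManager

memory = MemoryManager()
hint = memory.get_similar_qa("What is the capital of Maharashtra?")
if hint:  # assumed falsy when no similar question is stored
    print(f"Similar previous Q&A:\n{hint}")
```
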
## Error Handling

The system implements graceful degradation:
- **Tool failures**: Continue with available tools
- **API timeouts**: Retry with backoff
- **Memory errors**: Degrade to LLM-only mode
- **Agent failures**: Return informative error messages

## Performance Considerations

- **Caching**: Vector store searches cached for 5 minutes
- **Parallelization**: Tools can be executed in parallel
- **Memory limits**: Sandbox execution has resource constraints
- **Loop termination**: Hard limit of 3 iterations prevents infinite loops

## Extending the System

### Adding New Agents
1. Create agent file in `agents/` directory
2. Implement agent function returning Command (see the sketch below)
3. Add to workflow in `create_agent_graph()`
4. Update routing logic in Lead Agent

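A minimal sketch of steps 1 and 2 (the `review_agent` below is hypothetical; it only mirrors the `Command` conventions of the existing specialists):

```python
from typing import Any, Dict
from langgraph.types import Command

def review_agent(state: Dict[str, Any]) -> Command:
    """Hypothetical specialist that critiques the current draft answer."""
    note = f"\n### Review\nDraft length: {len(state.get('draft_answer', ''))} characters.\n"
    # Hand control back to the lead agent and accumulate findings in state,
    # mirroring research_agent and code_agent.
    return Command(
        goto="lead",
        update={"research_notes": state.get("research_notes", "") + note}
    )
```
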
### Adding New Tools
1. Implement tool following LangChain Tool interface
2. Add to appropriate agent's tool list
3. Update agent prompts to describe new capabilities

### Custom Memory Backends
1. Extend MemoryManager class
2. Implement required interface methods
3. Update initialization in memory_system.py

## Troubleshooting

### Common Issues
- **Missing API keys**: Check env.local file setup
- **Tool failures**: Verify network connectivity and API quotas
- **Memory errors**: Check Supabase configuration (optional)
- **Import errors**: Ensure all dependencies are installed

### Debug Mode
Set environment variable for detailed logging:
```bash
export LANGFUSE_DEBUG=true
```

This implementation follows the specified plan while incorporating LangGraph and Langfuse best practices for a robust, observable, and maintainable multi-agent system.
__pycache__/langgraph_agent_system.cpython-310.pyc ADDED
Binary file (4.84 kB)

__pycache__/langgraph_agent_system.cpython-313.pyc ADDED
Binary file (6.62 kB)

__pycache__/memory_system.cpython-313.pyc ADDED
Binary file (7.93 kB)

__pycache__/observability.cpython-310.pyc ADDED
Binary file (5.55 kB)

__pycache__/observability.cpython-313.pyc ADDED
Binary file (7.7 kB)

__pycache__/tools.cpython-313.pyc CHANGED
Binary files a/__pycache__/tools.cpython-313.pyc and b/__pycache__/tools.cpython-313.pyc differ

agents/__init__.py ADDED
@@ -0,0 +1,21 @@
"""
Multi-Agent System Components

This package contains the specialized agents for the LangGraph-based system:
- LeadAgent: Orchestrates workflow and decision making
- ResearchAgent: Information gathering and research tasks
- CodeAgent: Computational and code execution tasks
- AnswerFormatter: Final answer formatting according to GAIA requirements
"""

from .lead_agent import lead_agent
from .research_agent import research_agent
from .code_agent import code_agent
from .answer_formatter import answer_formatter

__all__ = [
    "lead_agent",
    "research_agent",
    "code_agent",
    "answer_formatter"
]
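These exports are what the graph builder consumes. A sketch of the expected wiring (our assumption; the real `create_agent_graph()` lives in `langgraph_agent_system.py`, which this view does not show):

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START

from agents import lead_agent, research_agent, code_agent, answer_formatter

class AgentState(TypedDict, total=False):
    # Stand-in schema; see the "Direct Graph Access" state in README_MULTI_AGENT_SYSTEM.md
    messages: list
    draft_answer: str
    research_notes: str
    code_outputs: str
    loop_counter: int
    next: str
    final_answer: str

workflow = StateGraph(AgentState)
workflow.add_node("lead", lead_agent)            # node names match the Command(goto=...) targets
workflow.add_node("research", research_agent)
workflow.add_node("code", code_agent)
workflow.add_node("formatter", answer_formatter)
workflow.add_edge(START, "lead")                 # the specialists route themselves via Command
app = workflow.compile()
```
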
agents/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (758 Bytes)

agents/__pycache__/answer_formatter.cpython-310.pyc ADDED
Binary file (5.94 kB)

agents/__pycache__/answer_formatter.cpython-313.pyc ADDED
Binary file (8.53 kB)

agents/__pycache__/code_agent.cpython-310.pyc ADDED
Binary file (12.1 kB)

agents/__pycache__/code_agent.cpython-313.pyc ADDED
Binary file (18.7 kB)

agents/__pycache__/lead_agent.cpython-310.pyc ADDED
Binary file (6.36 kB)

agents/__pycache__/lead_agent.cpython-313.pyc ADDED
Binary file (8.58 kB)

agents/__pycache__/research_agent.cpython-310.pyc ADDED
Binary file (7.22 kB)

agents/__pycache__/research_agent.cpython-313.pyc ADDED
Binary file (10.8 kB)

agents/answer_formatter.py ADDED
@@ -0,0 +1,243 @@
"""
Answer Formatter - Final answer formatting according to GAIA requirements

The Answer Formatter is responsible for:
1. Taking the draft answer and formatting it according to GAIA rules
2. Extracting the final answer from comprehensive responses
3. Ensuring exact-match compliance
4. Handling different answer types (numbers, strings, lists)
"""

import re
from typing import Dict, Any
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage
from langgraph.types import Command
from langchain_groq import ChatGroq
from observability import agent_span
from dotenv import load_dotenv

load_dotenv("env.local")


def extract_final_answer(text: str) -> str:
    """
    Extract the final answer from text following GAIA formatting rules.

    GAIA Rules:
    • Single number → write the number only (no commas, units, or other symbols)
    • Single string/phrase → write the text only; omit articles and abbreviations unless explicitly required
    • List → separate elements with a single comma and a space
    • Never include surrounding text, quotes, brackets, or markdown
    """

    if not text or not text.strip():
        return ""

    # Clean the text
    text = text.strip()

    # Look for explicit final answer markers
    answer_patterns = [
        r"final answer[:\s]*(.+?)(?:\n|$)",
        r"answer[:\s]*(.+?)(?:\n|$)",
        r"result[:\s]*(.+?)(?:\n|$)",
        r"conclusion[:\s]*(.+?)(?:\n|$)"
    ]

    for pattern in answer_patterns:
        match = re.search(pattern, text, re.IGNORECASE | re.MULTILINE)
        if match:
            text = match.group(1).strip()
            break

    # Remove common prefixes/suffixes
    prefixes_to_remove = [
        "the answer is", "it is", "this is", "that is",
        "final answer:", "answer:", "result:", "conclusion:",
        "therefore", "thus", "so", "hence"
    ]

    for prefix in prefixes_to_remove:
        if text.lower().startswith(prefix.lower()):
            text = text[len(prefix):].strip()

    # Remove quotes, brackets, and markdown
    text = re.sub(r'^["\'\[\(]|["\'\]\)]$', '', text)
    text = re.sub(r'^\*\*|\*\*$', '', text)  # Remove bold markdown
    text = re.sub(r'^`|`$', '', text)  # Remove code markdown

    # Handle different answer types

    # Check if it's a pure number
    number_match = re.match(r'^-?\d+(?:\.\d+)?$', text.strip())
    if number_match:
        # Return number without formatting
        num = float(text.strip()) if '.' in text else int(text.strip())
        return str(int(num)) if num == int(num) else str(num)

    # Check if it's a list (comma-separated)
    if ',' in text:
        items = [item.strip() for item in text.split(',')]
        # Clean each item
        cleaned_items = []
        for item in items:
            item = re.sub(r'^["\'\[\(]|["\'\]\)]$', '', item.strip())
            if item:
                cleaned_items.append(item)
        return ', '.join(cleaned_items)

    # For single strings, remove articles if they're not essential
    # But be careful not to remove essential parts
    words = text.split()
    if len(words) > 1 and words[0].lower() in ['the', 'a', 'an']:
        # Only remove if the rest makes sense
        remaining = ' '.join(words[1:])
        if remaining and len(remaining) > 2:
            text = remaining

    return text.strip()

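# Worked examples of the regex fallback above (ours, for illustration; not in
# the original commit):
#   extract_final_answer("Final answer: Paris")   -> "Paris"
#   extract_final_answer("Final answer: 2, 3, 5") -> "2, 3, 5"
#   extract_final_answer("-17.0")                 -> "-17"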

def load_formatter_prompt() -> str:
    """Load the formatting prompt"""
    try:
        with open("archive/prompts/verification_prompt.txt", "r") as f:
            return f.read()
    except FileNotFoundError:
        return """
You are a final answer formatter ensuring compliance with GAIA benchmark requirements.

Your task is to extract the precise final answer from a comprehensive response.

CRITICAL FORMATTING RULES:
• Single number → write the number only (no commas, units, or symbols)
• Single string/phrase → write the text only; omit articles unless required
• List → separate elements with comma and space
• NEVER include surrounding text like "Final Answer:", quotes, brackets, or markdown
• The response must contain ONLY the answer itself

Examples:
Question: "What is 25 + 17?"
Draft: "After calculating, the answer is 42."
Formatted: "42"

Question: "What is the capital of France?"
Draft: "The capital of France is Paris."
Formatted: "Paris"

Question: "List the first 3 prime numbers"
Draft: "The first three prime numbers are 2, 3, and 5."
Formatted: "2, 3, 5"

Extract ONLY the final answer following these rules exactly.
"""


def answer_formatter(state: Dict[str, Any]) -> Command:
    """
    Answer Formatter node that creates GAIA-compliant final answers.

    Takes the draft_answer and formats it according to GAIA requirements.
    Returns Command to END the workflow.
    """

    print("📝 Answer Formatter: Creating final formatted answer...")

    try:
        # Get formatting prompt
        formatter_prompt = load_formatter_prompt()

        # Initialize LLM for formatting
        llm = ChatGroq(
            model="llama-3.3-70b-versatile",
            temperature=0.0,  # Zero temperature for consistent formatting
            max_tokens=512
        )

        # Create agent span for tracing
        with agent_span(
            "formatter",
            metadata={
                "draft_answer_length": len(state.get("draft_answer", "")),
                "user_id": state.get("user_id", "unknown"),
                "session_id": state.get("session_id", "unknown")
            }
        ) as span:

            # Get the draft answer
            draft_answer = state.get("draft_answer", "")

            if not draft_answer:
                final_answer = "No answer could be generated."
            else:
                # Get the original question for context
                messages = state.get("messages", [])
                user_query = ""
                for msg in messages:
                    if isinstance(msg, HumanMessage):
                        user_query = msg.content
                        break

                # Build formatting request
                formatting_request = f"""
Extract the final answer from this comprehensive response following GAIA formatting rules:

Original Question: {user_query}

Draft Response:
{draft_answer}

Instructions:
1. Identify the core answer within the draft response
2. Remove all explanatory text, prefixes, and formatting
3. Apply GAIA formatting rules exactly
4. Return ONLY the final answer

What is the properly formatted final answer?
"""

                # Create messages for formatting
                formatting_messages = [
                    SystemMessage(content=formatter_prompt),
                    HumanMessage(content=formatting_request)
                ]

                # Get formatted response
                response = llm.invoke(formatting_messages)

                # Extract the final answer using our utility function
                final_answer = extract_final_answer(response.content)

                # Fallback: if extraction fails, try direct extraction from draft
                if not final_answer or len(final_answer) < 1:
                    print("⚠️ LLM formatting failed, using direct extraction")
                    final_answer = extract_final_answer(draft_answer)

                # Final fallback
                if not final_answer:
                    final_answer = "Unable to extract a clear answer."

            print(f"📝 Answer Formatter: Final answer = '{final_answer}'")

            # Update trace
            if span:
                span.update_trace(output={"final_answer": final_answer})

            # Return command to END the workflow
            return Command(
                goto="__end__",
                update={
                    "final_answer": final_answer
                }
            )

    except Exception as e:
        print(f"❌ Answer Formatter Error: {e}")

        # Return error as final answer
        return Command(
            goto="__end__",
            update={
                "final_answer": f"Error formatting answer: {str(e)}"
            }
        )
agents/code_agent.py ADDED
@@ -0,0 +1,440 @@
"""
Code Agent - Computational tasks and code execution

The Code Agent is responsible for:
1. Performing mathematical calculations
2. Executing Python code for data analysis
3. Processing numerical data and computations
4. Returning structured computational results
"""

import os
import sys
import io
import contextlib
from typing import Dict, Any, List
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage, AIMessage
from langgraph.types import Command
from langchain_groq import ChatGroq
from langchain_core.tools import Tool
from observability import agent_span, tool_span
from dotenv import load_dotenv

# Import calculator tools from the existing tools.py
from tools import get_calculator_tool, get_hub_stats_tool

load_dotenv("env.local")


def create_code_tools() -> List[Tool]:
    """Create LangChain-compatible computational tools"""
    tools = []

    # Mathematical calculator tools
    def multiply_func(a: float, b: float) -> str:
        """Multiply two numbers"""
        try:
            result = a * b
            return f"{a} × {b} = {result}"
        except Exception as e:
            return f"Error: {str(e)}"

    def add_func(a: float, b: float) -> str:
        """Add two numbers"""
        try:
            result = a + b
            return f"{a} + {b} = {result}"
        except Exception as e:
            return f"Error: {str(e)}"

    def subtract_func(a: float, b: float) -> str:
        """Subtract two numbers"""
        try:
            result = a - b
            return f"{a} - {b} = {result}"
        except Exception as e:
            return f"Error: {str(e)}"

    def divide_func(a: float, b: float) -> str:
        """Divide two numbers"""
        try:
            if b == 0:
                return "Error: Cannot divide by zero"
            result = a / b
            return f"{a} ÷ {b} = {result}"
        except Exception as e:
            return f"Error: {str(e)}"

    def modulus_func(a: int, b: int) -> str:
        """Get the modulus of two numbers"""
        try:
            if b == 0:
                return "Error: Cannot modulo by zero"
            result = a % b
            return f"{a} mod {b} = {result}"
        except Exception as e:
            return f"Error: {str(e)}"

    # Create calculator tools
    calc_tools = [
        Tool(name="multiply", description="Multiply two numbers. Use format: multiply(a, b)", func=lambda input_str: multiply_func(*map(float, input_str.split(',')))),
        Tool(name="add", description="Add two numbers. Use format: add(a, b)", func=lambda input_str: add_func(*map(float, input_str.split(',')))),
        Tool(name="subtract", description="Subtract two numbers. Use format: subtract(a, b)", func=lambda input_str: subtract_func(*map(float, input_str.split(',')))),
        Tool(name="divide", description="Divide two numbers. Use format: divide(a, b)", func=lambda input_str: divide_func(*map(float, input_str.split(',')))),
        Tool(name="modulus", description="Get modulus of two integers. Use format: modulus(a, b)", func=lambda input_str: modulus_func(*map(int, input_str.split(',')))),
    ]

+ tools.extend(calc_tools)
88
+ print(f"✅ Added {len(calc_tools)} calculator tools")
89
+
90
+ # Hub stats tool
91
+ try:
92
+ from tools import get_hub_stats
93
+
94
+ def hub_stats_func(author: str) -> str:
95
+ """Get Hugging Face Hub statistics for an author"""
96
+ try:
97
+ return get_hub_stats(author)
98
+ except Exception as e:
99
+ return f"Hub stats error: {str(e)}"
100
+
101
+ hub_tool = Tool(
102
+ name="hub_stats",
103
+ description="Get statistics for Hugging Face Hub models by author",
104
+ func=hub_stats_func
105
+ )
106
+ tools.append(hub_tool)
107
+ print("✅ Added Hub stats tool")
108
+ except Exception as e:
109
+ print(f"⚠️ Could not load Hub stats tool: {e}")
110
+
111
+ # Python execution tool
112
+ python_tool = create_python_execution_tool()
113
+ tools.append(python_tool)
114
+ print("✅ Added Python execution tool")
115
+
116
+ print(f"🔧 Code Agent loaded {len(tools)} tools")
117
+ return tools
118
+
119
+
120
+ def create_python_execution_tool() -> Tool:
121
+ """Create a tool for executing Python code safely"""
122
+
123
+ def execute_python_code(code: str) -> str:
124
+ """
125
+ Execute Python code in a controlled environment.
126
+
127
+ Args:
128
+ code: Python code to execute
129
+
130
+ Returns:
131
+ String containing the output or error message
132
+ """
133
+ # Create a string buffer to capture output
134
+ output_buffer = io.StringIO()
135
+ error_buffer = io.StringIO()
136
+
137
+ # Prepare a safe execution environment
138
+ safe_globals = {
139
+ '__builtins__': {
140
+ 'print': lambda *args, **kwargs: print(*args, file=output_buffer, **kwargs),
141
+ 'len': len,
142
+ 'str': str,
143
+ 'int': int,
144
+ 'float': float,
145
+ 'list': list,
146
+ 'dict': dict,
147
+ 'set': set,
148
+ 'tuple': tuple,
149
+ 'range': range,
150
+ 'sum': sum,
151
+ 'max': max,
152
+ 'min': min,
153
+ 'abs': abs,
154
+ 'round': round,
155
+ 'sorted': sorted,
156
+ 'enumerate': enumerate,
157
+ 'zip': zip,
158
+ 'map': map,
159
+ 'filter': filter,
160
+ }
161
+ }
162
+
163
+ # Allow common safe modules
164
+ try:
165
+ import math
166
+ import statistics
167
+ import datetime
168
+ import json
169
+ import re
170
+
171
+ safe_globals.update({
172
+ 'math': math,
173
+ 'statistics': statistics,
174
+ 'datetime': datetime,
175
+ 'json': json,
176
+ 're': re,
177
+ })
178
+ except ImportError:
179
+ pass
180
+
181
+ try:
182
+ # Execute the code
183
+ with contextlib.redirect_stdout(output_buffer), \
184
+ contextlib.redirect_stderr(error_buffer):
185
+ exec(code, safe_globals)
186
+
187
+ # Get the output
188
+ output = output_buffer.getvalue()
189
+ error = error_buffer.getvalue()
190
+
191
+ if error:
192
+ return f"Error: {error}"
193
+ elif output:
194
+ return output.strip()
195
+ else:
196
+ return "Code executed successfully (no output)"
197
+
198
+ except Exception as e:
199
+ return f"Execution error: {str(e)}"
200
+ finally:
201
+ output_buffer.close()
202
+ error_buffer.close()
203
+
204
+ return Tool(
205
+ name="python_execution",
206
+ description="Execute Python code for calculations and data processing. Use for complex computations, data analysis, or when calculator tools are insufficient.",
207
+ func=execute_python_code
208
+ )
209
+
210
+
211
+ def load_code_prompt() -> str:
212
+ """Load the code execution prompt"""
213
+ try:
214
+ with open("archive/prompts/execution_prompt.txt", "r") as f:
215
+ return f.read()
216
+ except FileNotFoundError:
217
+ return """
218
+ You are a computational specialist focused on accurate calculations and code execution.
219
+
220
+ Your goals:
221
+ 1. Perform mathematical calculations accurately
222
+ 2. Write and execute Python code for complex computations
223
+ 3. Process data and perform analysis as needed
224
+ 4. Provide clear, numerical results
225
+
226
+ When handling computational tasks:
227
+ - Use calculator tools for basic arithmetic operations
228
+ - Use Python execution for complex calculations, data processing, or multi-step computations
229
+ - Show your work and intermediate steps
230
+ - Verify results when possible
231
+ - Handle edge cases and potential errors
232
+
233
+ Available tools:
234
+ - Calculator tools: add, subtract, multiply, divide, modulus
235
+ - Python execution: for complex computations and data analysis
236
+ - Hub stats tool: for Hugging Face model information
237
+
238
+ Format your response as:
239
+ ### Computational Analysis
240
+ [Description of the approach]
241
+
242
+ ### Calculations
243
+ [Step-by-step calculations or code]
244
+
245
+ ### Results
246
+ [Final numerical results or outputs]
247
+ """
248
+
249
+
250
+ def code_agent(state: Dict[str, Any]) -> Command:
251
+ """
252
+ Code Agent node that handles computational tasks and code execution.
253
+
254
+ Returns Command with computational results appended to code_outputs.
255
+ """
256
+
257
+ print("🧮 Code Agent: Processing computational tasks...")
258
+
259
+ try:
260
+ # Get code execution prompt
261
+ code_prompt = load_code_prompt()
262
+
263
+ # Initialize LLM with tools
264
+ llm = ChatGroq(
265
+ model="llama-3.3-70b-versatile",
266
+ temperature=0.1, # Low temperature for accuracy in calculations
267
+ max_tokens=2048
268
+ )
269
+
270
+ # Get computational tools
271
+ tools = create_code_tools()
272
+
273
+ # Bind tools to LLM
274
+ llm_with_tools = llm.bind_tools(tools)
275
+
276
+ # Create agent span for tracing
277
+ with agent_span(
278
+ "code",
279
+ metadata={
280
+ "tools_available": len(tools),
281
+ "research_context_length": len(state.get("research_notes", "")),
282
+ "user_id": state.get("user_id", "unknown"),
283
+ "session_id": state.get("session_id", "unknown")
284
+ }
285
+ ) as span:
286
+
287
+ # Extract user query and research context
288
+ messages = state.get("messages", [])
289
+ user_query = ""
290
+ for msg in messages:
291
+ if isinstance(msg, HumanMessage):
292
+ user_query = msg.content
293
+ break
294
+
295
+ research_notes = state.get("research_notes", "")
296
+
297
+ # Build computational request
298
+ code_request = f"""
299
+ Please analyze the following question and perform any necessary calculations or code execution:
300
+
301
+ Question: {user_query}
302
+
303
+ Research Context:
304
+ {research_notes}
305
+
306
+ Current computational work: {len(state.get('code_outputs', ''))} characters already completed
307
+
308
+ Instructions:
309
+ 1. Identify any computational or mathematical aspects of the question
310
+ 2. Use appropriate tools for calculations or code execution
311
+ 3. Show your work and intermediate steps
312
+ 4. Provide clear, accurate results
313
+ 5. If no computation is needed, state that clearly
314
+
315
+ Please perform all necessary calculations to help answer this question.
316
+ """
317
+
318
+ # Create messages for code execution
319
+ code_messages = [
320
+ SystemMessage(content=code_prompt),
321
+ HumanMessage(content=code_request)
322
+ ]
323
+
324
+ # Get computational response
325
+ response = llm_with_tools.invoke(code_messages)
326
+
327
+ # Process the response - handle both tool calls and direct responses
328
+ computation_results = []
329
+
330
+ # Check if the LLM wants to use tools
331
+ if hasattr(response, 'tool_calls') and response.tool_calls:
332
+ print(f"🛠️ Executing {len(response.tool_calls)} computational operations")
333
+
334
+ # Execute tool calls and collect results
335
+ for tool_call in response.tool_calls:
336
+ try:
337
+ # Find the tool by name
338
+ tool = next((t for t in tools if t.name == tool_call['name']), None)
339
+ if tool:
340
+ # Handle different argument formats
341
+ args = tool_call.get('args', {})
342
+ if isinstance(args, dict):
343
+ # Convert dict args to string for simple tools
344
+ if len(args) == 1:
345
+ arg_value = list(args.values())[0]
346
+ else:
347
+ arg_value = ','.join(str(v) for v in args.values())
348
+ else:
349
+ arg_value = str(args)
350
+
351
+ result = tool.func(arg_value)
352
+ computation_results.append(f"**{tool.name}**: {result}")
353
+ else:
354
+ computation_results.append(f"**{tool_call['name']}**: Tool not found")
355
+ except Exception as e:
356
+ print(f"⚠️ Tool {tool_call.get('name', 'unknown')} failed: {e}")
357
+ computation_results.append(f"**{tool_call.get('name', 'unknown')}**: Error - {str(e)}")
358
+
359
+ # Compile computational results
360
+ if computation_results:
361
+ computational_findings = "\n\n".join(computation_results)
362
+ else:
363
+ # No tools used or tool calls failed, analyze if computation is needed
364
+ computational_findings = response.content if hasattr(response, 'content') else str(response)
365
+
366
+ # If the response looks like it should have used tools but didn't, try direct calculation
367
+ if any(op in user_query.lower() for op in ['+', '-', '*', '/', 'calculate', 'compute', 'multiply', 'add', 'subtract', 'divide']):
368
+ print("🔧 Attempting direct calculation...")
369
+
370
+ # Try to extract and solve simple mathematical expressions
371
+ import re
372
+
373
+ # Look for simple math expressions
374
+ math_patterns = [
375
+ r'(\d+)\s*\+\s*(\d+)', # addition
376
+ r'(\d+)\s*\*\s*(\d+)', # multiplication
377
+ r'(\d+)\s*-\s*(\d+)', # subtraction
378
+ r'(\d+)\s*/\s*(\d+)', # division
379
+ ]
380
+
381
+ for pattern in math_patterns:
382
+ matches = re.findall(pattern, user_query)
383
+ if matches:
384
+ for match in matches:
385
+ a, b = int(match[0]), int(match[1])
386
+ if '+' in user_query:
387
+ result = a + b
388
+ computational_findings += f"\n\nDirect calculation: {a} + {b} = {result}"
389
+ elif '*' in user_query:
390
+ result = a * b
391
+ computational_findings += f"\n\nDirect calculation: {a} × {b} = {result}"
392
+ elif '-' in user_query:
393
+ result = a - b
394
+ computational_findings += f"\n\nDirect calculation: {a} - {b} = {result}"
395
+ elif '/' in user_query:
396
+ result = a / b
397
+ computational_findings += f"\n\nDirect calculation: {a} ÷ {b} = {result}"
398
+ break
399
+
400
+ # Format computational results
401
+ formatted_results = f"""
402
+ ### Computational Analysis {state.get('loop_counter', 0) + 1}
403
+
404
+ {computational_findings}
405
+
406
+ ---
407
+ """
408
+
409
+ print(f"🧮 Code Agent: Generated {len(formatted_results)} characters of computational results")
410
+
411
+ # Update span with results if available
412
+ if span:
413
+ span.update_trace(metadata={
414
+ "computation_length": len(formatted_results),
415
+ "results_preview": formatted_results[:300] + "..."
416
+ })
417
+
418
+ # Return command to go back to lead agent
419
+ return Command(
420
+ goto="lead",
421
+ update={
422
+ "code_outputs": state.get("code_outputs", "") + formatted_results
423
+ }
424
+ )
425
+
426
+ except Exception as e:
427
+ print(f"❌ Code Agent Error: {e}")
428
+
429
+ # Return with error information
430
+ error_result = f"""
431
+ ### Computational Error
432
+ An error occurred during code execution: {str(e)}
433
+
434
+ """
435
+ return Command(
436
+ goto="lead",
437
+ update={
438
+ "code_outputs": state.get("code_outputs", "") + error_result
439
+ }
440
+ )
agents/lead_agent.py ADDED
@@ -0,0 +1,243 @@
"""
Lead Agent - Orchestrates the multi-agent workflow

The Lead Agent is responsible for:
1. Analyzing user queries and determining next steps
2. Managing the iterative research/code loop
3. Deciding when enough information has been gathered
4. Coordinating between specialized agents
5. Maintaining the overall workflow state
"""

import os
from typing import Dict, Any, Literal
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage, AIMessage
from langgraph.types import Command
from langchain_groq import ChatGroq
from observability import agent_span
from dotenv import load_dotenv

# Import memory system
from memory_system import MemoryManager

load_dotenv("env.local")

# Initialize memory manager
memory_manager = MemoryManager()

def load_system_prompt() -> str:
    """Load the system prompt for the lead agent"""
    try:
        with open("archive/prompts/system_prompt.txt", "r") as f:
            base_prompt = f.read()

        lead_prompt = f"""
{base_prompt}

As the Lead Agent, you coordinate a team of specialists:
- Research Agent: Gathers information from web, papers, and knowledge bases
- Code Agent: Performs calculations and executes Python code

Your responsibilities:
1. Analyze the user's question to determine what information and computations are needed
2. Decide whether to delegate to research, code, both, or proceed to final answer
3. Synthesize results from specialists into a coherent draft answer
4. Determine when sufficient information has been gathered

Decision criteria:
- If the question requires factual information, current events, or research → delegate to research
- If the question requires calculations, data analysis, or code execution → delegate to code
- If you have sufficient information to answer → proceed to formatting
- Maximum 3 iterations to prevent infinite loops

Always maintain the exact formatting requirements specified in the system prompt.
"""
        return lead_prompt
    except FileNotFoundError:
        return """You are a helpful assistant coordinating a team of specialists to answer questions accurately."""


def lead_agent(state: Dict[str, Any]) -> Command[Literal["research", "code", "formatter", "__end__"]]:
    """
    Lead Agent node that orchestrates the workflow.

    Makes decisions about:
    - Whether more research is needed
    - Whether code execution is needed
    - When to proceed to final formatting
    - When the loop should terminate

    Returns Command with routing decision and state updates.
    """

    loop_counter = state.get('loop_counter', 0)
    max_iterations = state.get('max_iterations', 3)

    print(f"🎯 Lead Agent: Processing request (iteration {loop_counter})")

    # Check for termination conditions first
    if loop_counter >= max_iterations:
        print("🔄 Maximum iterations reached, proceeding to formatter")
        return Command(
            goto="formatter",
            update={
                "loop_counter": loop_counter + 1,
                "next": "formatter"
            }
        )

    try:
        # Get the system prompt
        system_prompt = load_system_prompt()

        # Initialize LLM
        llm = ChatGroq(
            model="llama-3.3-70b-versatile",
            temperature=0.1,  # Low temperature for consistent routing decisions
            max_tokens=1024
        )

        # Create agent span for tracing
        with agent_span(
            "lead",
            metadata={
                "loop_counter": loop_counter,
                "research_notes_length": len(state.get("research_notes", "")),
                "code_outputs_length": len(state.get("code_outputs", "")),
                "user_id": state.get("user_id", "unknown"),
                "session_id": state.get("session_id", "unknown")
            }
        ) as span:

            # Build context for decision making
            messages = state.get("messages", [])
            research_notes = state.get("research_notes", "")
            code_outputs = state.get("code_outputs", "")

            # Get the original user query
            user_query = ""
            for msg in messages:
                if isinstance(msg, HumanMessage):
                    user_query = msg.content
                    break

            # Check for similar questions in memory
            similar_context = ""
            if user_query:
                try:
                    similar_qa = memory_manager.get_similar_qa(user_query)
                    if similar_qa:
                        similar_context = f"\n\nSimilar previous Q&A:\n{similar_qa}"
                except Exception as e:
+ print(f"💾 Memory cache hit") # Simplified message
133
+
134
+ # Build decision prompt
135
+ decision_prompt = f"""
136
+ Based on the user's question and current progress, decide the next action.
137
+
138
+ Original Question: {user_query}
139
+
140
+ Current Progress:
141
+ - Loop iteration: {loop_counter}
142
+ - Research gathered: {len(research_notes)} characters
143
+ - Code outputs: {len(code_outputs)} characters
144
+
145
+ Research Notes So Far:
146
+ {research_notes if research_notes else "None yet"}
147
+
148
+ Code Outputs So Far:
149
+ {code_outputs if code_outputs else "None yet"}
150
+
151
+ {similar_context}
152
+
153
+ Analyze what's still needed:
154
+ 1. Is factual information, current events, or research missing? → route to "research"
155
+ 2. Are calculations, data analysis, or code execution needed? → route to "code"
156
+ 3. Do we have sufficient information to provide a complete answer? → route to "formatter"
157
+
158
+ Respond with ONLY one of: research, code, formatter
159
+ """
160
+
161
+ # Get decision from LLM
162
+ decision_messages = [
163
+ SystemMessage(content=system_prompt),
164
+ HumanMessage(content=decision_prompt)
165
+ ]
166
+
167
+ response = llm.invoke(decision_messages)
168
+ decision = response.content.strip().lower()
169
+
170
+ # Validate decision
171
+ valid_decisions = ["research", "code", "formatter"]
172
+ if decision not in valid_decisions:
173
+ print(f"⚠️ Invalid decision '{decision}', defaulting to 'research'")
174
+ decision = "research"
175
+
176
+ # Prepare state updates
177
+ updates = {
178
+ "loop_counter": loop_counter + 1,
179
+ "next": decision
180
+ }
181
+
182
+ # If we're done, create draft answer
183
+ if decision == "formatter":
184
+ # Create a comprehensive draft answer from gathered information
185
+ draft_prompt = f"""
186
+ Create a comprehensive answer based on all gathered information:
187
+
188
+ Original Question: {user_query}
189
+
190
+ Research Information:
191
+ {research_notes}
192
+
193
+ Code Results:
194
+ {code_outputs}
195
+
196
+ Instructions:
197
+ 1. Synthesize all available information to answer the question
198
+ 2. If computational results are available, include them
199
+ 3. If research provides context, incorporate it
200
+ 4. Provide a clear, direct answer to the user's question
201
+ 5. Focus on accuracy and completeness
202
+
203
+ What is your answer to the user's question?
204
+ """
205
+
206
+ draft_messages = [
207
+ SystemMessage(content=system_prompt),
208
+ HumanMessage(content=draft_prompt)
209
+ ]
210
+
211
+ try:
212
+ draft_response = llm.invoke(draft_messages)
213
+ draft_content = draft_response.content if hasattr(draft_response, 'content') else str(draft_response)
214
+ updates["draft_answer"] = draft_content
215
+ print(f"📝 Lead Agent: Created draft answer ({len(draft_content)} characters)")
216
+ except Exception as e:
217
+ print(f"⚠️ Error creating draft answer: {e}")
218
+ # Fallback - create a simple answer from available data
219
+ fallback_answer = f"Based on the available information:\n\nResearch: {research_notes}\nCalculations: {code_outputs}"
220
+ updates["draft_answer"] = fallback_answer
221
+
222
+ # Log decision
223
+ print(f"🎯 Lead Agent Decision: {decision} (iteration {loop_counter + 1})")
224
+
225
+ if span:
226
+ span.update_trace(output={"decision": decision, "updates": updates})
227
+
228
+ return Command(
229
+ goto=decision,
230
+ update=updates
231
+ )
232
+
233
+ except Exception as e:
234
+ print(f"❌ Lead Agent Error: {e}")
235
+ # On error, proceed to formatter with error message
236
+ return Command(
237
+ goto="formatter",
238
+ update={
239
+ "draft_answer": f"I encountered an error while processing your request: {str(e)}",
240
+ "loop_counter": loop_counter + 1,
241
+ "next": "formatter"
242
+ }
243
+ )
agents/research_agent.py ADDED
@@ -0,0 +1,295 @@
"""
Research Agent - Information gathering and research tasks

The Research Agent is responsible for:
1. Gathering information from multiple sources (web, Wikipedia, arXiv)
2. Searching for relevant context and facts
3. Compiling research results in a structured format
4. Returning citations and source information
"""

import os
from typing import Dict, Any, List
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage, AIMessage
from langgraph.types import Command
from langchain_groq import ChatGroq
from langchain_core.tools import Tool
from observability import agent_span, tool_span
from dotenv import load_dotenv

# Import tools from the existing tools.py
from tools import (
    get_tavily_tool,
    get_wikipedia_tool,
    get_arxiv_tool,
    get_wikipedia_reader,
    get_arxiv_reader
)

load_dotenv("env.local")


def create_research_tools() -> List[Tool]:
    """Create LangChain-compatible research tools"""
    tools = []

    try:
        # Import LlamaIndex tools and convert them
        from tools import get_tavily_tool, get_wikipedia_tool, get_arxiv_tool

        # Tavily web search
        try:
            tavily_spec = get_tavily_tool()
            if tavily_spec:
                # Convert to LangChain tool
                def tavily_search(query: str) -> str:
                    try:
                        tavily_tools = tavily_spec.to_tool_list()
                        if tavily_tools:
                            result = tavily_tools[0].call({"input": query})
                            return str(result)
                    except Exception as e:
                        return f"Search error: {str(e)}"
                    return "No search results found"

                tavily_tool = Tool(
                    name="web_search",
                    description="Search the web for current information and facts using Tavily API",
                    func=tavily_search
                )
                tools.append(tavily_tool)
                print(f"✅ Added Tavily web search tool")
        except Exception as e:
            print(f"⚠️ Could not load Tavily tools: {e}")

        # Wikipedia search
        try:
            wikipedia_tool = get_wikipedia_tool()
            if wikipedia_tool:
                def wikipedia_search(query: str) -> str:
                    try:
                        result = wikipedia_tool.call({"input": query})
                        return str(result)
                    except Exception as e:
                        return f"Wikipedia search error: {str(e)}"

                wiki_tool = Tool(
                    name="wikipedia_search",
                    description="Search Wikipedia for encyclopedic information",
                    func=wikipedia_search
                )
                tools.append(wiki_tool)
                print("✅ Added Wikipedia tool")
        except Exception as e:
            print(f"⚠️ Could not load Wikipedia tool: {e}")

        # ArXiv search
        try:
            arxiv_tool = get_arxiv_tool()
            if arxiv_tool:
                def arxiv_search(query: str) -> str:
                    try:
                        result = arxiv_tool.call({"input": query})
                        return str(result)
                    except Exception as e:
                        return f"ArXiv search error: {str(e)}"

                arxiv_lc_tool = Tool(
                    name="arxiv_search",
                    description="Search ArXiv for academic papers and research",
                    func=arxiv_search
                )
                tools.append(arxiv_lc_tool)
                print("✅ Added ArXiv tool")
        except Exception as e:
            print(f"⚠️ Could not load ArXiv tool: {e}")

    except Exception as e:
        print(f"⚠️ Error setting up research tools: {e}")

    print(f"🔧 Research Agent loaded {len(tools)} tools")
    return tools

+
114
+ def load_research_prompt() -> str:
115
+ """Load the research-specific prompt"""
116
+ try:
117
+ with open("archive/prompts/retrieval_prompt.txt", "r") as f:
118
+ return f.read()
119
+ except FileNotFoundError:
120
+ return """
121
+ You are a research specialist focused on gathering accurate information.
122
+
123
+ Your goals:
124
+ 1. Search for factual, current, and relevant information
125
+ 2. Use multiple sources to verify facts
126
+ 3. Provide clear citations and sources
127
+ 4. Structure findings in an organized manner
128
+
129
+ When researching:
130
+ - Use web search for current information and facts
131
+ - Use Wikipedia for encyclopedic knowledge
132
+ - Use ArXiv for academic and technical topics
133
+ - Cross-reference information across sources
134
+ - Note any conflicting information found
135
+
136
+ Format your response as:
137
+ ### Research Results
138
+ - **Source 1**: [findings]
139
+ - **Source 2**: [findings]
140
+ - **Source 3**: [findings]
141
+
142
+ ### Key Facts
143
+ - Fact 1
144
+ - Fact 2
145
+ - Fact 3
146
+
147
+ ### Citations
148
+ - Citation 1
149
+ - Citation 2
150
+ """
151
+
152
+
153
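+ # LangGraph node contract: the function receives the shared state dict and
+ # returns a Command that both updates state and routes control to the next node.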
+ def research_agent(state: Dict[str, Any]) -> Command:
+     """
+     Research Agent node that gathers information using available tools.
+ 
+     Returns a Command with research results appended to research_notes.
+     """
+ 
+     print("🔍 Research Agent: Gathering information...")
+ 
+     try:
+         # Get research prompt
+         research_prompt = load_research_prompt()
+ 
+         # Initialize the LLM (tools are bound separately below)
+         llm = ChatGroq(
+             model="llama-3.3-70b-versatile",
+             temperature=0.3,  # Slightly higher for research creativity
+             max_tokens=2048
+         )
+ 
+         # Get research tools
+         tools = create_research_tools()
+ 
+         # Bind tools to the LLM if available
+         if tools:
+             llm_with_tools = llm.bind_tools(tools)
+         else:
+             llm_with_tools = llm
+             print("⚠️ No tools available, proceeding with LLM only")
+ 
+         # Create agent span for tracing
+         with agent_span(
+             "research",
+             metadata={
+                 "tools_available": len(tools),
+                 "user_id": state.get("user_id", "unknown"),
+                 "session_id": state.get("session_id", "unknown")
+             }
+         ) as span:
+ 
+             # Extract user query
+             messages = state.get("messages", [])
+             user_query = ""
+             for msg in messages:
+                 if isinstance(msg, HumanMessage):
+                     user_query = msg.content
+                     break
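+             # Note: this takes the *first* HumanMessage in the history; in a
+             # multi-turn session the most recent user message may be the better
+             # choice.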
+ 
+             # Build research request
+             research_request = f"""
+ Please research the following question using available tools:
+ 
+ Question: {user_query}
+ 
+ Current research status: {len(state.get('research_notes', ''))} characters already gathered
+ 
+ Instructions:
+ 1. Search for factual information relevant to the question
+ 2. Use multiple sources if possible for verification
+ 3. Focus on accuracy and currency of information
+ 4. Provide clear citations and sources
+ 5. Structure your findings clearly
+ 
+ Please gather comprehensive information to help answer this question.
+ """
+ 
+             # Create messages for research
+             research_messages = [
+                 SystemMessage(content=research_prompt),
+                 HumanMessage(content=research_request)
+             ]
+ 
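+             # This is a single round of tool calling: the model proposes tool
+             # calls once and their raw outputs are compiled directly, rather
+             # than being fed back to the model in an iterative loop.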
+             # Get research response
+             if tools:
+                 # Try using tools for research
+                 response = llm_with_tools.invoke(research_messages)
+ 
+                 # If the response contains tool calls, execute them
+                 if hasattr(response, 'tool_calls') and response.tool_calls:
+                     print(f"🛠️ Executing {len(response.tool_calls)} tool calls")
+ 
+                     # Execute tool calls and collect results
+                     tool_results = []
+                     for tool_call in response.tool_calls:
+                         try:
+                             # Find the matching tool by name
+                             tool = next((t for t in tools if t.name == tool_call['name']), None)
+                             if tool:
+                                 result = tool.run(tool_call['args'])
+                                 tool_results.append(f"**{tool.name}**: {result}")
+                             else:
+                                 tool_results.append(f"**{tool_call['name']}**: Error - tool not found")
+                         except Exception as e:
+                             print(f"⚠️ Tool {tool_call['name']} failed: {e}")
+                             tool_results.append(f"**{tool_call['name']}**: Error - {str(e)}")
+ 
+                     # Compile research results
+                     research_findings = "\n\n".join(tool_results) if tool_results else response.content
+                 else:
+                     research_findings = response.content
+             else:
+                 # No tools available, use LLM knowledge only
+                 research_findings = llm.invoke(research_messages).content
+ 
+             # Format research results
+             formatted_results = f"""
+ ### Research Iteration {state.get('loop_counter', 0) + 1}
+ 
+ {research_findings}
+ 
+ ---
+ """
+ 
+             print(f"📝 Research Agent: Gathered {len(formatted_results)} characters")
+ 
+             # Update trace
+             if span:
+                 span.update_trace(output={
+                     "research_length": len(formatted_results),
+                     "findings_preview": formatted_results[:300] + "..."
+                 })
+ 
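+             # Command.update replaces state keys by default; accumulating
+             # research_notes across iterations assumes the graph state declares
+             # an appending reducer for that field.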
+             # Return command to proceed back to lead agent for decision
+             return Command(
+                 goto="lead",
+                 update={
+                     "research_notes": formatted_results
+                 }
+             )
+ 
+     except Exception as e:
+         print(f"❌ Research Agent Error: {e}")
+ 
+         # Return with error information
+         error_result = f"""
+ ### Research Error
+ An error occurred during research: {str(e)}
+ 
+ """
+         return Command(
+             goto="lead",
+             update={
+                 "research_notes": error_result
+             }
+         )
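+ 
+ 
+ # Wiring sketch (hypothetical — the actual graph is assembled elsewhere in this
+ # repo; `AgentState` and `lead_agent` are assumed names, not defined here):
+ #     from langgraph.graph import StateGraph, START
+ #     builder = StateGraph(AgentState)
+ #     builder.add_node("lead", lead_agent)
+ #     builder.add_node("research", research_agent)  # Command(goto="lead") routes back
+ #     builder.add_edge(START, "lead")
+ #     graph = builder.compile()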
archive/.cursor/rules/archive.mdc ADDED
@@ -0,0 +1,6 @@
+ ---
+ description:
+ globs: *.py,*.md
+ alwaysApply: false
+ ---
+ Don't make any changes to these files. Everything in this folder is for reference only: you may copy a file and modify the copy, but never modify the originals in this folder.
ARCHITECTURE.md → archive/ARCHITECTURE.md RENAMED
File without changes
archive/README.md ADDED
@@ -0,0 +1 @@
+ This folder contains previous attempts at agent creation, which do not follow the recommended workflow for building agents described in https://www.anthropic.com/engineering/built-multi-agent-research-system and https://cognition.ai/blog/dont-build-multi-agents
{prompts → archive/prompts}/critic_prompt.txt RENAMED
File without changes
{prompts → archive/prompts}/execution_prompt.txt RENAMED
File without changes
{prompts → archive/prompts}/retrieval_prompt.txt RENAMED
File without changes
{prompts → archive/prompts}/router_prompt.txt RENAMED
File without changes
{prompts → archive/prompts}/system_prompt.txt RENAMED
File without changes
{prompts → archive/prompts}/verification_prompt.txt RENAMED
File without changes
{src → archive/src}/__init__.py RENAMED
File without changes
{src → archive/src}/__pycache__/__init__.cpython-313.pyc RENAMED
File without changes
{src → archive/src}/__pycache__/langgraph_system.cpython-313.pyc RENAMED
Binary files a/src/__pycache__/langgraph_system.cpython-313.pyc and b/archive/src/__pycache__/langgraph_system.cpython-313.pyc differ
 
{src → archive/src}/__pycache__/memory.cpython-313.pyc RENAMED
Binary files a/src/__pycache__/memory.cpython-313.pyc and b/archive/src/__pycache__/memory.cpython-313.pyc differ
 
{src → archive/src}/__pycache__/tracing.cpython-313.pyc RENAMED
Binary files a/src/__pycache__/tracing.cpython-313.pyc and b/archive/src/__pycache__/tracing.cpython-313.pyc differ
 
{src → archive/src}/agents/__init__.py RENAMED
File without changes
{src → archive/src}/agents/__pycache__/__init__.cpython-313.pyc RENAMED
Binary files a/src/agents/__pycache__/__init__.cpython-313.pyc and b/archive/src/agents/__pycache__/__init__.cpython-313.pyc differ
 
{src → archive/src}/agents/__pycache__/critic_agent.cpython-313.pyc RENAMED
Binary files a/src/agents/__pycache__/critic_agent.cpython-313.pyc and b/archive/src/agents/__pycache__/critic_agent.cpython-313.pyc differ
 
{src → archive/src}/agents/__pycache__/execution_agent.cpython-313.pyc RENAMED
Binary files a/src/agents/__pycache__/execution_agent.cpython-313.pyc and b/archive/src/agents/__pycache__/execution_agent.cpython-313.pyc differ
 
{src → archive/src}/agents/__pycache__/plan_node.cpython-313.pyc RENAMED
Binary files a/src/agents/__pycache__/plan_node.cpython-313.pyc and b/archive/src/agents/__pycache__/plan_node.cpython-313.pyc differ
 
{src → archive/src}/agents/__pycache__/retrieval_agent.cpython-313.pyc RENAMED
Binary files a/src/agents/__pycache__/retrieval_agent.cpython-313.pyc and b/archive/src/agents/__pycache__/retrieval_agent.cpython-313.pyc differ
 
{src → archive/src}/agents/__pycache__/router_node.cpython-313.pyc RENAMED
Binary files a/src/agents/__pycache__/router_node.cpython-313.pyc and b/archive/src/agents/__pycache__/router_node.cpython-313.pyc differ
 
{src → archive/src}/agents/__pycache__/verification_node.cpython-313.pyc RENAMED
Binary files a/src/agents/__pycache__/verification_node.cpython-313.pyc and b/archive/src/agents/__pycache__/verification_node.cpython-313.pyc differ
 
{src → archive/src}/agents/critic_agent.py RENAMED
File without changes
{src → archive/src}/agents/execution_agent.py RENAMED
File without changes
{src → archive/src}/agents/plan_node.py RENAMED
File without changes
{src → archive/src}/agents/retrieval_agent.py RENAMED
File without changes
{src → archive/src}/agents/router_node.py RENAMED
File without changes
{src → archive/src}/agents/verification_node.py RENAMED
File without changes