{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Context Editing & Memory for Long-Running Agents\n",
    "\n",
    "AI agents that run across multiple sessions or handle long-running tasks face two key challenges: they lose learned patterns between conversations, and context windows fill up during extended interactions.\n",
    "\n",
    "This cookbook demonstrates how to address these challenges using Claude's memory tool and context editing capabilities."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Table of Contents\n",
    "\n",
    "1. [Introduction: Why Memory Matters](#introduction)\n",
    "2. [Use Cases](#use-cases)\n",
    "3. [Quick Start Examples](#quick-start)\n",
    "4. [How It Works](#how-it-works)\n",
    "5. [Code Review Assistant Demo](#demo)\n",
    "6. [Real-World Applications](#real-world)\n",
    "7. [Best Practices](#best-practices)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "\n",
    "**Required Knowledge:**\n",
    "- Python fundamentals (functions, classes, async/await basics)\n",
    "- Basic understanding of REST APIs and JSON\n",
    "\n",
    "**Required Tools:**\n",
    "- Python 3.10 or higher\n",
    "- Anthropic API key ([get one here](https://console.anthropic.com/))\n",
    "\n",
    "**Recommended:**\n",
    "- Familiarity with concurrent programming concepts (threads, async)\n",
    "- Basic understanding of context windows in LLMs\n",
    "\n",
    "## Setup\n",
    "\n",
    "### For VSCode Users\n",
    "\n",
    "```bash\n",
    "# 1. Create virtual environment\n",
    "python -m venv .venv\n",
    "\n",
    "# 2. Activate it\n",
    "source .venv/bin/activate  # macOS/Linux\n",
    "# or: .venv\\Scripts\\activate  # Windows\n",
    "\n",
    "# 3. Install dependencies\n",
    "pip install -r requirements.txt\n",
    "\n",
    "# 4. In VSCode: Select .venv as kernel (top right)\n",
    "```\n",
    "\n",
    "### API Key\n",
    "\n",
    "```bash\n",
    "cp .env.example .env\n",
    "# Edit .env and add your ANTHROPIC_API_KEY\n",
    "```\n",
    "\n",
    "Get your API key from: https://console.anthropic.com/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": "## 1. Introduction: Why Memory Matters {#introduction}\n\nThis cookbook demonstrates practical implementations of the context engineering patterns described in [Effective context engineering for AI agents](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents). That post covers why context is a finite resource, how attention budgets work, and strategies for building effective agents—the techniques you'll see in action here.\n\n### The Problem\n\nLarge language models have finite context windows (200k tokens for the Claude 4 family of models). While this seems large, several challenges emerge:\n\n- **Context limits**: Long conversations or complex tasks can exceed available context\n- **Computational cost**: Processing large contexts is expensive - attention mechanisms scale quadratically\n- **Repeated patterns**: Similar tasks across conversations require re-explaining context every time\n- **Information loss**: When context fills up, earlier important information gets lost\n\n### The Solution\n\nClaude 4 models introduce powerful context management capabilities:\n\n1. **Memory Tool** (`memory_20250818`): Enables cross-conversation learning\n   - Claude can write down what it learns for future reference\n   - File-based system under `/memories` directory\n   - Client-side implementation gives you full control\n\n2. 
**Context Editing**: Automatically manages context with two strategies:\n   - **Tool use clearing** (`clear_tool_uses_20250919`): Clears old tool results when context grows large\n   - **Thinking management** (`clear_thinking_20251015`): Manages extended thinking blocks (requires thinking enabled)\n   - Configurable triggers and retention policies\n\n### The Benefit\n\nBuild AI agents that **get better at your specific tasks over time**:\n\n- **Session 1**: Claude solves a problem, writes down the pattern\n- **Session 2**: Claude applies the learned pattern immediately (faster!)\n- **Long sessions**: Context editing keeps conversations manageable\n\nThink of it as giving Claude a notebook to take notes and refer back to - just like humans do.\n\n### What You'll Learn\n\nBy the end of this cookbook, you will be able to:\n- **Implement** the memory tool for cross-conversation learning\n- **Configure** context editing to manage long-running sessions\n- **Apply** best practices for memory security and organization"
  },
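  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch of how the tool use clearing strategy is configured: context editing is driven by a `context_management` block sent with the request. The field names below follow the context management beta and are illustrative - verify them against the current API documentation before relying on them.\n",
    "\n",
    "```python\n",
    "# Illustrative configuration for the clear_tool_uses_20250919 strategy\n",
    "context_management = {\n",
    "    \"edits\": [\n",
    "        {\n",
    "            \"type\": \"clear_tool_uses_20250919\",\n",
    "            # Start clearing once the prompt grows past ~30k input tokens\n",
    "            \"trigger\": {\"type\": \"input_tokens\", \"value\": 30000},\n",
    "            # Always keep the three most recent tool results intact\n",
    "            \"keep\": {\"type\": \"tool_uses\", \"value\": 3},\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "```\n",
    "\n",
    "The trigger and retention values are tunable knobs, not recommendations - pick thresholds based on how quickly your tool results accumulate."
   ]
  },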
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": "## 2. Use Cases {#use-cases}\n\nMemory and context management enable powerful new workflows:\n\n### 🔍 Code Review Assistant\n- Learns debugging patterns from past reviews\n- Recognizes similar bugs instantly in future sessions\n- Builds team-specific code quality knowledge\n- **Production ready**: Integrate with [claude-code-action](https://github.com/anthropics/claude-code-action) for GitHub PR reviews\n\n### 📚 Research Assistant\n- Accumulates knowledge on topics over multiple sessions\n- Connects insights across different research threads\n- Maintains bibliography and source tracking\n\n### 💬 Customer Support Bot\n- Learns user preferences and communication style\n- Remembers common issues and solutions\n- Builds product knowledge base from interactions\n\n### 📊 Data Analysis Helper\n- Remembers dataset patterns and anomalies\n- Stores analysis techniques that work well\n- Builds domain-specific insights over time\n\n**Supported Models**: Claude Opus 4.1 (`claude-opus-4-1`) and Claude Sonnet 4.5 (`claude-sonnet-4-5`)\n\n**This cookbook focuses on the Code Review Assistant** as it clearly demonstrates both memory (learning patterns) and context editing (handling long reviews)."
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Quick Start Examples {#quick-start}\n",
    "\n",
    "Let's see memory and context management in action with simple examples."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup\n",
    "\n",
    "First, install dependencies and configure your environment:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-11-15T17:11:26.527705Z",
     "iopub.status.busy": "2025-11-15T17:11:26.527494Z",
     "iopub.status.idle": "2025-11-15T17:11:30.063615Z",
     "shell.execute_reply": "2025-11-15T17:11:30.063106Z"
    },
    "vscode": {
     "languageId": "python"
    }
   },
   "outputs": [],
   "source": [
    "%%capture\n",
    "# Install required packages\n",
    "# Option 1: From requirements.txt\n",
    "# %pip install -q -r requirements.txt\n",
    "\n",
    "# Option 2: Direct install\n",
    "%pip install -q anthropic python-dotenv ipykernel"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**⚠️ Important**: Create a `.env` file in this directory:\n",
    "\n",
    "```bash\n",
    "# Copy .env.example to .env and add your API key\n",
    "cp .env.example .env\n",
    "```\n",
    "\n",
    "Then edit `.env` to add your Anthropic API key from https://console.anthropic.com/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✓ API key loaded\n",
      "✓ Using model: claude-sonnet-4-5\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "from typing import cast\n",
    "\n",
    "from anthropic import Anthropic\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "# Load environment variables\n",
    "load_dotenv()\n",
    "\n",
    "# Model configuration - use alias for automatic updates\n",
    "MODEL = \"claude-sonnet-4-5\"  # Can override via ANTHROPIC_MODEL env var\n",
    "if os.getenv(\"ANTHROPIC_MODEL\"):\n",
    "    MODEL = os.getenv(\"ANTHROPIC_MODEL\")\n",
    "\n",
    "if not API_KEY:\n",
    "    raise ValueError(\"ANTHROPIC_API_KEY not found. Copy .env.example to .env and add your API key.\")\n",
    "\n",
    "if not MODEL:\n",
    "    raise ValueError(\"ANTHROPIC_MODEL not found. Copy .env.example to .env and set the model.\")\n",
    "\n",
    "MODEL = cast(str, MODEL)\n",
    "\n",
    "client = Anthropic(api_key=API_KEY)\n",
    "\n",
    "print(\"✓ API key loaded\")\n",
    "print(f\"✓ Using model: {MODEL}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Example 1: Basic Memory Usage\n",
    "\n",
    "Let's see Claude use memory to store information for future reference."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Helper Functions**\n",
    "\n",
    "These examples use helper functions from `demo_helpers.py`:\n",
    "\n",
    "- **`run_conversation_loop()`**: Handles the API conversation loop\n",
    "  - Calls Claude's API with memory tool enabled\n",
    "  - Executes tool uses (memory operations)\n",
    "  - Continues until Claude stops using tools\n",
    "  - Returns the final response\n",
    "\n",
    "- **`run_conversation_turn()`**: Single turn (used in Example 3)\n",
    "  - Same as above but returns after one API call\n",
    "  - Useful when you need fine-grained control\n",
    "\n",
    "- **`print_context_management_info()`**: Displays context clearing stats\n",
    "  - Shows tokens saved, tool uses cleared\n",
    "  - Helps visualize when context editing triggers"
   ]
  },
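  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition about what the handler does under the hood, here is a deliberately tiny sketch: the memory tool's commands (`view`, `create`, `str_replace`) are just file operations under a sandboxed directory. `MiniMemoryHandler` below is illustrative only - the real `MemoryToolHandler` in `memory_tool.py` supports more commands and stricter path validation.\n",
    "\n",
    "```python\n",
    "from pathlib import Path\n",
    "\n",
    "\n",
    "class MiniMemoryHandler:\n",
    "    \"\"\"Toy handler mapping memory tool commands onto local files (illustrative).\"\"\"\n",
    "\n",
    "    def __init__(self, base_path: str):\n",
    "        self.base = Path(base_path)\n",
    "        (self.base / \"memories\").mkdir(parents=True, exist_ok=True)\n",
    "\n",
    "    def _resolve(self, memory_path: str) -> Path:\n",
    "        # Paths arrive as /memories/review.md; reject traversal out of the sandbox\n",
    "        if \"..\" in memory_path:\n",
    "            raise ValueError(\"Path traversal is not allowed\")\n",
    "        return self.base / memory_path.lstrip(\"/\")\n",
    "\n",
    "    def execute(self, command: str, path: str, **kwargs) -> str:\n",
    "        target = self._resolve(path)\n",
    "        if command == \"view\" and target.is_dir():\n",
    "            listing = \"\\n\".join(sorted(p.name for p in target.iterdir()))\n",
    "            return listing or \"(empty)\"\n",
    "        if command == \"view\":\n",
    "            return target.read_text()\n",
    "        if command == \"create\":\n",
    "            target.write_text(kwargs[\"file_text\"])\n",
    "            return f\"File created successfully at {path}\"\n",
    "        if command == \"str_replace\":\n",
    "            text = target.read_text()\n",
    "            target.write_text(text.replace(kwargs[\"old_str\"], kwargs[\"new_str\"], 1))\n",
    "            return f\"File {path} has been edited successfully\"\n",
    "        raise ValueError(f\"Unsupported command: {command}\")\n",
    "```\n",
    "\n",
    "Because the handler runs client-side, you decide where memory lives, what Claude may touch, and how to audit it - which is also why path validation matters."
   ]
  },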
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**⚠️ Note on Memory Clearing**\n",
    "\n",
    "The following cell clears all memory files to provide a clean slate for this demonstration. This is useful for running the notebook multiple times to see consistent results.\n",
    "\n",
    "**In production applications**, you should carefully consider whether to clear all memory, as it permanently removes learned patterns. Consider using selective deletion or organizing memory into project-specific directories instead."
   ]
  },
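  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, selective deletion is straightforward with the file layout this demo uses (a `memories/` folder under the handler's base path - a convention of this demo, not an API requirement). A minimal sketch:\n",
    "\n",
    "```python\n",
    "from pathlib import Path\n",
    "\n",
    "\n",
    "def delete_memory_file(base_path: str, filename: str) -> bool:\n",
    "    \"\"\"Delete a single learned note instead of clearing all memory.\"\"\"\n",
    "    target = Path(base_path) / \"memories\" / filename\n",
    "    if target.is_file():\n",
    "        target.unlink()\n",
    "        return True  # removed\n",
    "    return False  # nothing to remove\n",
    "```\n",
    "\n",
    "For example, `delete_memory_file(\"./demo_memory\", \"review.md\")` would drop one stale pattern while preserving everything else."
   ]
  },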
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-11-15T17:11:30.331182Z",
     "iopub.status.busy": "2025-11-15T17:11:30.331084Z",
     "iopub.status.idle": "2025-11-15T17:12:09.831980Z",
     "shell.execute_reply": "2025-11-15T17:12:09.831417Z"
    },
    "vscode": {
     "languageId": "python"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🧹 Clearing previous memories...\n",
      "✓ Memory cleared\n",
      "\n",
      "============================================================\n",
      "📝 SESSION 1: Learning from a bug\n",
      "============================================================\n",
      "\n",
      "🔄 Turn 1:\n",
      "💬 Claude: I'll review this code for the race condition issue. Let me first check my memory, then analyze the problem.\n",
      "\n",
      "  🔧 Memory tool: view /memories\n",
      "  ✓ Result: Directory: /memories\n",
      "(empty)\n",
      "\n",
      "🔄 Turn 2:\n",
      "  🔧 Memory tool: create /memories/review.md\n",
      "  ✓ Result: File created successfully at /memories/review.md\n",
      "\n",
      "🔄 Turn 3:\n",
      "💬 Claude: ## Code Review: Multi-threaded Web Scraper Race Condition\n",
      "\n",
      "### 🔴 **Critical Issue Found: Race Condition in Shared State**\n",
      "\n",
      "You've correctly identified the problem! The code has **race conditions** caused by multiple threads modifying shared lists without synchronization.\n",
      "\n",
      "---\n",
      "\n",
      "### **The Problem**\n",
      "\n",
      "**Lines with race conditions:**\n",
      "```python\n",
      "self.results.append(result)  # RACE CONDITION\n",
      "```\n",
      "\n",
      "**Problem:** \n",
      "- Python's `list.append()` is **NOT thread-safe** for concurrent modifications\n",
      "- Multiple threads simultaneously appending can cause:\n",
      "  - **Lost updates:** One thread's append overwrites another's\n",
      "  - **Inconsistent state:** Internal list structure can be corrupted\n",
      "  - **Unpredictable counts:** Results randomly dropped\n",
      "\n",
      "**Why it happens:**\n",
      "`list.append()` involves multiple operations:\n",
      "1. Read current list size\n",
      "2. Allocate space\n",
      "3. Insert item\n",
      "4. Update size\n",
      "\n",
      "# In scrape_urls():\n",
      "if \"error\" in result:\n",
      "    self.failed_urls.append(result[\"url\"])  # ⚠️ NOT THREAD-SAFE\n",
      "else:\n",
      "    self.results.append(result)  # ⚠️ NOT THREAD-SAFE\n",
      "```\n",
      "\n",
      "**Why this causes lost results:**\n",
      "- Python's `list.append()` is **not atomic** in all scenarios\n",
      "- When multiple threads call `append()` simultaneously, the internal list operations can interleave\n",
      "- This can cause lost updates where one thread's append overwrites another's\n",
      "\n",
      "---\n",
      "\n",
      "### **Solutions**\n",
      "\n",
      "#### **Option 1: Use Thread Locks (Traditional approach)**\n",
      "```python\n",
      "import threading\n",
      "\n",
      "class WebScraper:\n",
      "    \"\"\"Web scraper that fetches multiple URLs concurrently.\"\"\"\n",
      "\n",
      "    def __init__(self, max_workers: int = 10):\n",
      "        self.max_workers = max_workers\n",
      "        self.results = []\n",
      "        self.failed_urls = []\n",
      "        self.lock = threading.Lock()  # ✅ Add lock\n",
      "\n",
      "    def scrape_urls(self, urls: List[str]) -> List[Dict[str, Any]]:\n",
      "        with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n",
      "            futures = [executor.submit(self.fetch_url, url) for url in urls]\n",
      "\n",
      "            for future in as_completed(futures):\n",
      "                result = future.result()\n",
      "\n",
      "                # ✅ Protect shared state with lock\n",
      "                with self.lock:\n",
      "                    if \"error\" in result:\n",
      "                        self.failed_urls.append(result[\"url\"])\n",
      "                    else:\n",
      "                        self.results.append(result)\n",
      "\n",
      "        return self.results\n",
      "```\n",
      "\n",
      "#### **Option 2: Use Thread-Safe Collections**\n",
      "```python\n",
      "from queue import Queue\n",
      "\n",
      "class WebScraper:\n",
      "    \"\"\"Web scraper that fetches multiple URLs concurrently.\"\"\"\n",
      "\n",
      "    def __init__(self, max_workers: int = 10):\n",
      "        self.max_workers = max_workers\n",
      "        self.results = Queue()  # ✅ Thread-safe\n",
      "        self.failed_urls = Queue()  # ✅ Thread-safe\n",
      "\n",
      "    def scrape_urls(self, urls: List[str]) -> List[Dict[str, Any]]:\n",
      "        with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n",
      "            futures = [executor.submit(self.fetch_url, url) for url in urls]\n",
      "\n",
      "            for future in as_completed(futures):\n",
      "                result = future.result()\n",
      "\n",
      "                if \"error\" in result:\n",
      "                    self.failed_urls.put(result[\"url\"])\n",
      "                else:\n",
      "                    self.results.put(result)\n",
      "\n",
      "        # Convert Queue to list for return\n",
      "        return list(self.results.queue)\n",
      "```\n",
      "\n",
      "#### **Option 3: Collect Results from Futures (Best approach)**\n",
      "```python\n",
      "def scrape_urls(self, urls: List[str]) -> List[Dict[str, Any]]:\n",
      "    \"\"\"\n",
      "    ✅ FIXED: Collect results in main thread, avoiding shared state.\n",
      "    \"\"\"\n",
      "    results = []  # Local to main thread\n",
      "    failed_urls = []  # Local to main thread\n",
      "    \n",
      "    with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n",
      "        futures = [executor.submit(self.fetch_url, url) for url in urls]\n",
      "\n",
      "        for future in as_completed(futures):\n",
      "            result = future.result()\n",
      "\n",
      "            # ✅ Only main thread accesses these lists\n",
      "            if \"error\" in result:\n",
      "                failed_urls.append(result[\"url\"])\n",
      "            else:\n",
      "                results.append(result)\n",
      "\n",
      "    # Update instance variables once, in main thread\n",
      "    self.results = results\n",
      "    self.failed_urls = failed_urls\n",
      "    \n",
      "    return self.results\n",
      "```\n",
      "\n",
      "---\n",
      "\n",
      "### **Recommended Fix (Complete)**\n",
      "\n",
      "Here's the fully corrected version using **Option 3** (cleanest approach):\n",
      "\n",
      "```python\n",
      "\"\"\"\n",
      "Concurrent web scraper - FIXED VERSION\n",
      "Results collected in main thread to avoid race conditions.\n",
      "\"\"\"\n",
      "\n",
      "import time\n",
      "from concurrent.futures import ThreadPoolExecutor, as_completed\n",
      "from typing import List, Dict, Any\n",
      "\n",
      "import requests\n",
      "\n",
      "\n",
      "class WebScraper:\n",
      "    \"\"\"Web scraper that fetches multiple URLs concurrently.\"\"\"\n",
      "\n",
      "    def __init__(self, max_workers: int = 10):\n",
      "        self.max_workers = max_workers\n",
      "        self.results = []\n",
      "        self.failed_urls = []\n",
      "\n",
      "    def fetch_url(self, url: str) -> Dict[str, Any]:\n",
      "        \"\"\"Fetch a single URL and return the result.\"\"\"\n",
      "        try:\n",
      "            response = requests.get(url, timeout=5)\n",
      "            response.raise_for_status()\n",
      "            return {\n",
      "                \"url\": url,\n",
      "                \"status\": response.status_code,\n",
      "                \"content_length\": len(response.content),\n",
      "            }\n",
      "        except requests.exceptions.RequestException as e:\n",
      "            return {\"url\": url, \"error\": str(e)}\n",
      "\n",
      "    def scrape_urls(self, urls: List[str]) -> List[Dict[str, Any]]:\n",
      "        \"\"\"\n",
      "        Scrape multiple URLs concurrently.\n",
      "        \n",
      "        ✅ FIXED: Results collected in main thread only.\n",
      "        \"\"\"\n",
      "        results = []  # ✅ Local variables - no race condition\n",
      "        failed_urls = []\n",
      "        \n",
      "        with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n",
      "            futures = [executor.submit(self.fetch_url, url) for url in urls]\n",
      "\n",
      "            for future in as_completed(futures):\n",
      "                result = future.result()\n",
      "\n",
      "                # ✅ Only the main thread modifies these lists\n",
      "                if \"error\" in result:\n",
      "                    failed_urls.append(result[\"url\"])\n",
      "                else:\n",
      "                    results.append(result)\n",
      "\n",
      "        # Update instance variables after all threads complete\n",
      "        self.results = results\n",
      "        self.failed_urls = failed_urls\n",
      "        \n",
      "        return self.results\n",
      "\n",
      "    def get_stats(self) -> Dict[str, int]:\n",
      "        \"\"\"Get scraping statistics.\"\"\"\n",
      "        return {\n",
      "            \"total_results\": len(self.results),\n",
      "            \"failed_urls\": len(self.failed_urls),\n",
      "            \"success_rate\": (\n",
      "                len(self.results) / (len(self.results) + len(self.failed_urls))\n",
      "                if (len(self.results) + len(self.failed_urls)) > 0\n",
      "                else 0\n",
      "            ),\n",
      "        }\n",
      "\n",
      "\n",
      "if __name__ == \"__main__\":\n",
      "    urls = [\n",
      "        \"https://httpbin.org/delay/1\",\n",
      "        \"https://httpbin.org/status/200\",\n",
      "        \"https://httpbin.org/status/404\",\n",
      "        \"https://httpbin.org/delay/2\",\n",
      "        \"https://httpbin.org/status/500\",\n",
      "    ] * 10  # 50 URLs total\n",
      "\n",
      "    scraper = WebScraper(max_workers=10)\n",
      "    results = scraper.scrape_urls(urls)\n",
      "\n",
      "    print(f\"Expected: 50 results\")\n",
      "    print(f\"Got: {len(results)} results\")  # ✅ Now always 50\n",
      "    print(f\"Stats: {scraper.get_stats()}\")\n",
      "```\n",
      "\n",
      "---\n",
      "\n",
      "### **Summary**\n",
      "\n",
      "- **Root cause**: Multiple threads appending to shared lists (`self.results` and `self.failed_urls`) without synchronization\n",
      "- **Impact**: Lost updates leading to inconsistent result counts\n",
      "- **Best fix**: Collect results in local variables within the main thread, then update instance variables once\n",
      "- **Why this works**: Only one thread (main) accesses the result lists, eliminating the race condition entirely\n",
      "\n",
      "The key insight: **Avoid shared mutable state in concurrent code**. Let worker threads return values and collect them in the coordinating thread.\n",
      "\n",
      "  🔧 Memory tool: str_replace /memories/review.md\n",
      "  ✓ Result: File /memories/review.md has been edited successfully\n",
      "\n",
      "\n",
      "============================================================\n",
      "✅ Session 1 complete!\n",
      "============================================================\n"
     ]
    }
   ],
   "source": [
    "# Import helper functions\n",
    "from memory_demo.demo_helpers import (\n",
    "    run_conversation_loop,\n",
    "    run_conversation_turn,\n",
    "    print_context_management_info,\n",
    ")\n",
    "from memory_tool import MemoryToolHandler\n",
    "\n",
    "# Initialize\n",
    "client = Anthropic()\n",
    "memory = MemoryToolHandler(base_path=\"./demo_memory\")\n",
    "\n",
    "# Clear any existing memories to start fresh\n",
    "print(\"🧹 Clearing previous memories...\")\n",
    "memory.clear_all_memory()\n",
    "print(\"✓ Memory cleared\\n\")\n",
    "\n",
    "# Load example code with a race condition bug\n",
    "with open(\"memory_demo/sample_code/web_scraper_v1.py\", \"r\") as f:\n",
    "    code_to_review = f.read()\n",
    "\n",
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": f\"I'm reviewing a multi-threaded web scraper that sometimes returns fewer results than expected. The count is inconsistent across runs. Can you find the issue?\\n\\n```python\\n{code_to_review}\\n```\",\n",
    "    }\n",
    "]\n",
    "\n",
    "print(\"=\" * 60)\n",
    "print(\"📝 SESSION 1: Learning from a bug\")\n",
    "print(\"=\" * 60)\n",
    "\n",
    "# Run conversation loop\n",
    "response = run_conversation_loop(\n",
    "    client=client,\n",
    "    model=MODEL,\n",
    "    messages=messages,\n",
    "    memory_handler=memory,\n",
    "    system=\"You are a code reviewer.\",\n",
    "    max_tokens=2048,\n",
    "    max_turns=5,\n",
    "    verbose=True,\n",
    ")\n",
    "\n",
    "print(\"\\n\" + \"=\" * 60)\n",
    "print(\"✅ Session 1 complete!\")\n",
    "print(\"=\" * 60)"
   ]
  },
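  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The fix Claude recommends - collect results from futures in the coordinating thread instead of mutating shared lists - can be verified without any network access. In this sketch, `fake_fetch` is a hypothetical stand-in for the scraper's `fetch_url`:\n",
    "\n",
    "```python\n",
    "from concurrent.futures import ThreadPoolExecutor, as_completed\n",
    "\n",
    "\n",
    "def fake_fetch(url: str) -> dict:\n",
    "    \"\"\"Stand-in for fetch_url(): URLs containing 'bad' fail, the rest succeed.\"\"\"\n",
    "    if \"bad\" in url:\n",
    "        return {\"url\": url, \"error\": \"simulated failure\"}\n",
    "    return {\"url\": url, \"status\": 200}\n",
    "\n",
    "\n",
    "def scrape(urls: list[str], max_workers: int = 10) -> tuple[list, list]:\n",
    "    results, failed = [], []  # local to the coordinating thread - no shared state\n",
    "    with ThreadPoolExecutor(max_workers=max_workers) as executor:\n",
    "        futures = [executor.submit(fake_fetch, url) for url in urls]\n",
    "        for future in as_completed(futures):\n",
    "            result = future.result()\n",
    "            # Only the coordinating thread appends, so no lock is needed\n",
    "            (failed if \"error\" in result else results).append(result)\n",
    "    return results, failed\n",
    "\n",
    "\n",
    "urls = [f\"https://example.com/{i}\" for i in range(40)] + [\"https://bad.example/x\"] * 10\n",
    "results, failed = scrape(urls)\n",
    "print(len(results), len(failed))  # -> 40 10, on every run\n",
    "```\n",
    "\n",
    "Worker threads only return values; the result lists are touched by a single thread, so no result can be lost."
   ]
  },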
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**What happened?**\n",
    "\n",
    "1. Claude checked its memory (empty on first run)\n",
    "2. Identified the bug: **race condition** - multiple threads modifying shared state (`self.results` and `self.failed_urls`) without synchronization\n",
    "3. Stored the concurrency pattern in memory for future reference\n",
    "\n",
    "Now let's see the magic - Claude applying this learned pattern in a **new conversation**:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Example 2: Cross-Conversation Learning\n",
    "\n",
    "Start a completely new conversation - memory persists!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-11-15T17:12:09.834796Z",
     "iopub.status.busy": "2025-11-15T17:12:09.834455Z",
     "iopub.status.idle": "2025-11-15T17:12:49.692649Z",
     "shell.execute_reply": "2025-11-15T17:12:49.692286Z"
    },
    "vscode": {
     "languageId": "python"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "============================================================\n",
      "🚀 SESSION 2: Applying learned pattern\n",
      "============================================================\n",
      "\n",
      "🔄 Turn 1:\n",
      "  🔧 Memory tool: view /memories\n",
      "  ✓ Result: Directory: /memories\n",
      "- review.md\n",
      "\n",
      "🔄 Turn 2:\n",
      "  🔧 Memory tool: view /memories/review.md\n",
      "  ✓ Result:    1: # Code Review: Multi-threaded Web Scraper\n",
      "   2: \n",
      "   3: ## Issue\n",
      "   4: Revi...\n",
      "\n",
      "🔄 Turn 3:\n",
      "  🔧 Memory tool: str_replace /memories/review.md\n",
      "  ✓ Result: File /memories/review.md has been edited successfully\n",
      "\n",
      "🔄 Turn 4:\n",
      "💬 Claude: ## Code Review: Async API Client\n",
      "\n",
      "### Summary\n",
      "This code has **concurrency issues** related to shared mutable state being accessed from multiple coroutines without proper synchronization. While the author correctly identifies these as bugs in comments, let me provide a comprehensive review.\n",
      "\n",
      "However, the actual race condition risk here is **somewhat overstated** because:\n",
      "- List `.append()` is atomic in CPython\n",
      "- The `+=` operation on integers is also atomic\n",
      "- Coroutines only switch at `await` points, and there are none between the operations\n",
      "\n",
      "**BUT** this is still problematic because:\n",
      "- It relies on CPython implementation details\n",
      "- It's not guaranteed by the language specification\n",
      "- The code is not portable to other Python implementations\n",
      "- Future refactoring could introduce `await` points that cause real races\n",
      "\n",
      "#### 1. **Race Condition on `self.error_count` (Line 51)**\n",
      "```python\n",
      "self.error_count += 1  # Not atomic!\n",
      "```\n",
      "\n",
      "**Problem**: Even in async code, this operation is not atomic. While asyncio is single-threaded, the `+=` operator involves:\n",
      "1. Read current value\n",
      "2. Add 1\n",
      "3. Write back\n",
      "\n",
      "An `await` in another coroutine between steps 1-3 can cause lost updates.\n",
      "\n",
      "**Impact**: Error count will be incorrect/underreported.\n",
      "\n",
      "#### 2. **Unsafe List Append on `self.responses` (Line 53)**\n",
      "```python\n",
      "self.responses.append(result)  # Not thread-safe in async context!\n",
      "```\n",
      "\n",
      "**Problem**: While `list.append()` is technically atomic in CPython due to GIL, this is:\n",
      "- An implementation detail, not guaranteed by the language spec\n",
      "- Bad practice that makes the code fragile\n",
      "- Still problematic because the list is being modified while potentially being read elsewhere\n",
      "\n",
      "**Impact**: Potential data corruption, missing responses, or incorrect counts.\n",
      "\n",
      "#### 3. **Logic Error: Errors Not Included in Results**\n",
      "```python\n",
      "if \"error\" in result:\n",
      "    self.error_count += 1  # Counted but not stored!\n",
      "else:\n",
      "    self.responses.append(result)  # Only successes stored\n",
      "```\n",
      "\n",
      "**Problem**: Failed requests increment `error_count` but are **never added to `self.responses`**. This means:\n",
      "- The return value of `fetch_all()` only contains successful responses\n",
      "- Error details are lost\n",
      "- The summary calculation is inconsistent\n",
      "\n",
      "---\n",
      "\n",
      "### ⚠️ **Design Issues**\n",
      "\n",
      "#### 4. **Stateful Class Design is Anti-Pattern for Concurrent Operations**\n",
      "The class maintains mutable state (`responses`, `error_count`) that's modified during concurrent operations. This is problematic because:\n",
      "- Makes the client non-reusable (calling `fetch_all()` twice will mix results)\n",
      "- Harder to reason about state consistency\n",
      "- Requires manual reset between calls\n",
      "\n",
      "**Better approach**: Return data directly rather than accumulating in instance variables.\n",
      "\n",
      "#### 5. **Inconsistent Error Handling**\n",
      "Errors are caught and converted to dictionaries, but then handled differently from successes. This makes the API inconsistent and loses valuable error information.\n",
      "\n",
      "---\n",
      "\n",
      "### 🟡 **Minor Issues**\n",
      "\n",
      "#### 6. **Missing Type Hints**\n",
      "The module imports `Optional` but never uses it. The return types could be more specific.\n",
      "\n",
      "#### 7. **Hardcoded Test Data in Main**\n",
      "The `* 20` multiplier and specific endpoints make testing less flexible.\n",
      "\n",
      "**Fix:** Use `asyncio.gather()` which is simpler and more idiomatic for this use case.\n",
      "\n",
      "### 5. **Missing HTTP Status Code Validation**\n",
      "**Location:** `fetch_endpoint()` method\n",
      "\n",
      "  🔧 Memory tool: str_replace /memories/review.md\n",
      "  ✓ Result: File /memories/review.md has been edited successfully\n",
      "\n",
      "🔄 Turn 5:\n",
      "💬 Claude: Here's a **corrected version**:\n",
      "\n",
      "**Fix:**\n",
      "```python\n",
      "\"\"\"\n",
      "Async API client with proper concurrency handling.\n",
      "\"\"\"\n",
      "\n",
      "import asyncio\n",
      "from typing import List, Dict, Any\n",
      "from dataclasses import dataclass\n",
      "\n",
      "import aiohttp\n",
      "\n",
      "\n",
      "@dataclass\n",
      "class FetchResult:\n",
      "    \"\"\"Result from fetching an endpoint.\"\"\"\n",
      "    endpoint: str\n",
      "    status: int = 0\n",
      "    data: Any = None\n",
      "    error: str = None\n",
      "    \n",
      "    @property\n",
      "    def is_success(self) -> bool:\n",
      "        return self.error is None\n",
      "\n",
      "\n",
      "class AsyncAPIClient:\n",
      "    \"\"\"Async API client for fetching data from multiple endpoints.\"\"\"\n",
      "\n",
      "    def __init__(self, base_url: str):\n",
      "        self.base_url = base_url\n",
      "\n",
      "    async def fetch_endpoint(\n",
      "        self, session: aiohttp.ClientSession, endpoint: str\n",
      "    ) -> FetchResult:\n",
      "        \"\"\"Fetch a single endpoint.\"\"\"\n",
      "        url = f\"{self.base_url}/{endpoint}\"\n",
      "        try:\n",
      "            async with session.get(\n",
      "                url, timeout=aiohttp.ClientTimeout(total=5)\n",
      "            ) as response:\n",
      "                data = await response.json()\n",
      "                return FetchResult(\n",
      "                    endpoint=endpoint,\n",
      "                    status=response.status,\n",
      "                    data=data,\n",
      "                )\n",
      "        except Exception as e:\n",
      "            return FetchResult(\n",
      "                endpoint=endpoint,\n",
      "                error=str(e),\n",
      "            )\n",
      "\n",
      "    async def fetch_all(self, endpoints: List[str]) -> List[FetchResult]:\n",
      "        \"\"\"\n",
      "        Fetch multiple endpoints concurrently.\n",
      "        \n",
      "        Fixed: Returns all results directly without modifying shared state.\n",
      "        Uses asyncio.gather() which is cleaner than as_completed() for this use case.\n",
      "        \"\"\"\n",
      "        async with aiohttp.ClientSession() as session:\n",
      "            tasks = [self.fetch_endpoint(session, endpoint) for endpoint in endpoints]\n",
      "            # gather() returns results in order and handles all concurrency\n",
      "            results = await asyncio.gather(*tasks)\n",
      "            return results\n",
      "\n",
      "    @staticmethod\n",
      "    def get_summary(results: List[FetchResult]) -> Dict[str, Any]:\n",
      "        \"\"\"Get summary statistics from results.\"\"\"\n",
      "        successes = sum(1 for r in results if r.is_success)\n",
      "        errors = len(results) - successes\n",
      "        \n",
      "        return {\n",
      "            \"total_responses\": len(results),\n",
      "            \"successful\": successes,\n",
      "            \"errors\": errors,\n",
      "            \"success_rate\": successes / len(results) if results else 0,\n",
      "        }\n",
      "\n",
      "\n",
      "async def main():\n",
      "    \"\"\"Test the async API client.\"\"\"\n",
      "    client = AsyncAPIClient(\"https://jsonplaceholder.typicode.com\")\n",
      "\n",
      "    endpoints = [\n",
      "        \"posts/1\",\n",
      "        \"posts/2\",\n",
      "        \"posts/3\",\n",
      "        \"users/1\",\n",
      "        \"users/2\",\n",
      "        \"invalid/endpoint\",  # Will error\n",
      "    ] * 20  # 120 requests total\n",
      "\n",
      "    results = await client.fetch_all(endpoints)\n",
      "\n",
      "    # Separate successes and errors for clarity\n",
      "    successes = [r for r in results if r.is_success]\n",
      "    errors = [r for r in results if not r.is_success]\n",
      "\n",
      "    print(f\"Expected: 120 total responses (100 successful, 20 errors)\")\n",
      "    print(f\"Got: {len(results)} responses ({len(successes)} successful, {len(errors)} errors)\")\n",
      "    print(f\"Summary: {AsyncAPIClient.get_summary(results)}\")\n",
      "    \n",
      "    if errors:\n",
      "        print(f\"\\nFirst error: {errors[0].endpoint} - {errors[0].error}\")\n",
      "\n",
      "\n",
      "if __name__ == \"__main__\":\n",
      "    asyncio.run(main())\n",
      "```\n",
      "\n",
      "---\n",
      "\n",
      "### **Key Improvements in Fixed Version:**\n",
      "\n",
      "1. ✅ **No shared mutable state** - Results returned directly\n",
      "2. ✅ **All results captured** - Both successes and errors included\n",
      "3. ✅ **Type-safe with dataclass** - Clear structure for results\n",
      "4. ✅ **Reusable client** - Can call `fetch_all()` multiple times safely\n",
      "5. ✅ **Consistent error handling** - All results have same structure\n",
      "6. ✅ **Simpler concurrency** - Uses `asyncio.gather()` instead of `as_completed()`\n",
      "7. ✅ **Static summary method** - Takes results as parameter, no state dependency\n",
      "\n",
      "---\n",
      "\n",
      "### **Alternative: If You Need Shared State**\n",
      "\n",
      "If you absolutely need to accumulate results in the instance (e.g., for streaming/progressive updates), use an `asyncio.Lock`:\n",
      "\n",
      "```python\n",
      "import asyncio\n",
      "\n",
      "class AsyncAPIClient:\n",
      "    def __init__(self, base_url: str):\n",
      "        self.base_url = base_url\n",
      "        self.responses = []\n",
      "        self.error_count = 0\n",
      "        self._lock = asyncio.Lock()  # Add lock\n",
      "\n",
      "    async def fetch_all(self, endpoints: List[str]) -> List[Dict[str, Any]]:\n",
      "        async with aiohttp.ClientSession() as session:\n",
      "            tasks = [self.fetch_endpoint(session, endpoint) for endpoint in endpoints]\n",
      "            \n",
      "            for coro in asyncio.as_completed(tasks):\n",
      "                result = await coro\n",
      "                \n",
      "                # Protect shared state with lock\n",
      "                async with self._lock:\n",
      "                    if \"error\" in result:\n",
      "                        self.error_count += 1\n",
      "                    else:\n",
      "                        self.responses.append(result)\n",
      "        \n",
      "        return self.responses\n",
      "```\n",
      "\n",
      "However, the stateless design I showed first is **strongly preferred** for async code.\n",
      "\n",
      "import asyncio\n",
      "from typing import List, Dict, Any, TypedDict\n",
      "\n",
      "### **Testing Recommendation**\n",
      "\n",
      "Add tests that verify all 120 requests are accounted for:\n",
      "\n",
      "```python\n",
      "async def test_all_results_captured():\n",
      "    client = AsyncAPIClient(\"https://jsonplaceholder.typicode.com\")\n",
      "    endpoints = [\"posts/1\"] * 100\n",
      "    results = await client.fetch_all(endpoints)\n",
      "    assert len(results) == 100, f\"Expected 100, got {len(results)}\"\n",
      "```\n",
      "\n",
      "\n",
      "============================================================\n",
      "✅ Session 2 complete!\n",
      "============================================================\n"
     ]
    }
   ],
   "source": [
    "# NEW conversation (empty messages)\n",
    "# Load API client code with similar concurrency issue\n",
    "with open(\"memory_demo/sample_code/api_client_v1.py\", \"r\") as f:\n",
    "    code_to_review = f.read()\n",
    "\n",
    "messages = [\n",
    "    {\"role\": \"user\", \"content\": f\"Review this API client code:\\n\\n```python\\n{code_to_review}\\n```\"}\n",
    "]\n",
    "\n",
    "print(\"=\" * 60)\n",
    "print(\"🚀 SESSION 2: Applying learned pattern\")\n",
    "print(\"=\" * 60)\n",
    "\n",
    "# Run conversation loop\n",
    "response = run_conversation_loop(\n",
    "    client=client,\n",
    "    model=MODEL,\n",
    "    messages=messages,\n",
    "    memory_handler=memory,\n",
    "    system=\"You are a code reviewer.\",\n",
    "    max_tokens=2048,\n",
    "    max_turns=5,\n",
    "    verbose=True,\n",
    ")\n",
    "\n",
    "print(\"\\n\" + \"=\" * 60)\n",
    "print(\"✅ Session 2 complete!\")\n",
    "print(\"=\" * 60)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Notice the difference:**\n",
    "\n",
    "- Claude **immediately checked memory** and found the thread-safety/concurrency pattern\n",
    "- Recognized the similar issue in async code **instantly** without re-learning\n",
    "- Response was **faster** because it applied stored knowledge about shared mutable state\n",
    "\n",
    "This is **cross-conversation learning** in action!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Example 3: Context Clearing While Preserving Memory\n",
    "\n",
    "What happens during a **long review session** with many code files?\n",
    "\n",
    "- Context fills up with tool results from previous reviews\n",
    "- But memory (learned patterns) must persist!\n",
    "\n",
    "Let's trigger **context editing** to see how Claude manages this automatically.\n",
    "\n",
    "**Note on configuration:** We use `clear_at_least: 50` tokens because memory tool operations have small results (~50-150 tokens each). In production with larger tool results (like web search or code execution), you'd use higher values like 3000-5000 tokens."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "============================================================\n",
      "📚 SESSION 3: Long review session with context clearing\n",
      "============================================================\n",
      "\n",
      "📝 Review 1: Data processor\n",
      "🧠 Thinking: The user wants me to review a data processor with multiple concurrency and thread-safety issues. Let...\n",
      "  🔧 Memory tool: str_replace /memories/review.md\n",
      "  ✓ Result: File /memories/review.md has been edited successfully\n",
      "  📊 Input tokens: 6,611\n",
      "  ℹ️  Context below threshold - no clearing triggered\n",
      "\n",
      "📝 Review 2: SQL query builder\n",
      "🧠 Thinking: The user is now asking me to review a SQL query builder with SQL injection vulnerabilities. This is ...\n",
      "  🔧 Memory tool: str_replace /memories/review.md\n",
      "  ✓ Result: File /memories/review.md has been edited successfully\n",
      "  📊 Input tokens: 7,923\n",
      "  ✂️  Context editing triggered!\n",
      "      • Cleared 1 thinking turn(s), saved 166 tokens\n",
      "      • After clearing: 7,923 tokens\n",
      "\n",
      "📝 Review 3: Web scraper (should trigger clearing)\n",
      "🧠 Thinking: This is a quick check request for a web scraper with threading issues. Let me quickly identify the p...\n",
      "  🔧 Memory tool: str_replace /memories/review.md\n",
      "  ✓ Result: File /memories/review.md has been edited successfully\n",
      "  📊 Input tokens: 9,052\n",
      "  ✂️  Context editing triggered!\n",
      "      • Cleared 2 thinking turn(s), saved 265 tokens\n",
      "      • After clearing: 9,052 tokens\n",
      "\n",
      "============================================================\n",
      "✅ Session 3 complete!\n",
      "============================================================\n"
     ]
    }
   ],
   "source": [
    "# Configure context management with BOTH clearing strategies\n",
    "# Low thresholds for demo - in production use 30-40k tokens\n",
    "CONTEXT_MANAGEMENT = {\n",
    "    \"edits\": [\n",
    "        # Thinking management MUST come first when combining strategies\n",
    "        {\n",
    "            \"type\": \"clear_thinking_20251015\",\n",
    "            \"keep\": {\"type\": \"thinking_turns\", \"value\": 1}  # Keep only last turn's thinking\n",
    "        },\n",
    "        {\n",
    "            \"type\": \"clear_tool_uses_20250919\",\n",
    "            \"trigger\": {\"type\": \"input_tokens\", \"value\": 5000},  # Low threshold for demo\n",
    "            \"keep\": {\"type\": \"tool_uses\", \"value\": 2},  # Keep last 2 tool uses\n",
    "            \"clear_at_least\": {\"type\": \"input_tokens\", \"value\": 2000}\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\n",
    "# Extended thinking config (required for clear_thinking strategy)\n",
    "THINKING = {\n",
    "    \"type\": \"enabled\",\n",
    "    \"budget_tokens\": 1024  # Budget for thinking per turn\n",
    "}\n",
    "\n",
    "# Continue from previous session - memory persists!\n",
    "print(\"=\" * 60)\n",
    "print(\"📚 SESSION 3: Long review session with context clearing\")\n",
    "print(\"=\" * 60)\n",
    "print()\n",
    "\n",
    "# Clean up messages - remove any empty content from previous session\n",
    "# This ensures we have a valid message state to continue from\n",
    "cleaned_messages = []\n",
    "for msg in messages:\n",
    "    if isinstance(msg.get(\"content\"), list):\n",
    "        # Filter out empty content blocks\n",
    "        content = [c for c in msg[\"content\"] if c]\n",
    "        if content:\n",
    "            cleaned_messages.append({\"role\": msg[\"role\"], \"content\": content})\n",
    "    elif msg.get(\"content\"):\n",
    "        cleaned_messages.append(msg)\n",
    "\n",
    "messages = cleaned_messages\n",
    "\n",
    "# Review 1: Data processor (larger file)\n",
    "with open(\"memory_demo/sample_code/data_processor_v1.py\", \"r\") as f:\n",
    "    data_processor_code = f.read()\n",
    "\n",
    "messages.append({\n",
    "    \"role\": \"user\",\n",
    "    \"content\": f\"Review this data processor:\\n\\n```python\\n{data_processor_code}\\n```\"\n",
    "})\n",
    "\n",
    "print(\"📝 Review 1: Data processor\")\n",
    "response = run_conversation_turn(\n",
    "    client=client,\n",
    "    model=MODEL,\n",
    "    messages=messages,\n",
    "    memory_handler=memory,\n",
    "    system=\"You are a code reviewer.\",\n",
    "    context_management=CONTEXT_MANAGEMENT,\n",
    "    thinking=THINKING,\n",
    "    max_tokens=4096,\n",
    "    verbose=True\n",
    ")\n",
    "\n",
    "# Add response to messages\n",
    "messages.append({\"role\": \"assistant\", \"content\": response[1]})\n",
    "if response[2]:\n",
    "    messages.append({\"role\": \"user\", \"content\": response[2]})\n",
    "\n",
    "print(f\"  📊 Input tokens: {response[0].usage.input_tokens:,}\")\n",
    "context_cleared, saved = print_context_management_info(response[0])\n",
    "print()\n",
    "\n",
    "# Review 2: SQL code\n",
    "with open(\"memory_demo/sample_code/sql_query_builder.py\", \"r\") as f:\n",
    "    sql_code = f.read()\n",
    "\n",
    "messages.append({\n",
    "    \"role\": \"user\",\n",
    "    \"content\": f\"Review this SQL query builder:\\n\\n```python\\n{sql_code}\\n```\"\n",
    "})\n",
    "\n",
    "print(\"📝 Review 2: SQL query builder\")\n",
    "response = run_conversation_turn(\n",
    "    client=client,\n",
    "    model=MODEL,\n",
    "    messages=messages,\n",
    "    memory_handler=memory,\n",
    "    system=\"You are a code reviewer.\",\n",
    "    context_management=CONTEXT_MANAGEMENT,\n",
    "    thinking=THINKING,\n",
    "    max_tokens=4096,\n",
    "    verbose=True\n",
    ")\n",
    "\n",
    "messages.append({\"role\": \"assistant\", \"content\": response[1]})\n",
    "if response[2]:\n",
    "    messages.append({\"role\": \"user\", \"content\": response[2]})\n",
    "\n",
    "print(f\"  📊 Input tokens: {response[0].usage.input_tokens:,}\")\n",
    "context_cleared, saved = print_context_management_info(response[0])\n",
    "print()\n",
    "\n",
    "# Review 3: Add one more review to ensure we trigger clearing\n",
    "with open(\"memory_demo/sample_code/web_scraper_v1.py\", \"r\") as f:\n",
    "    scraper_code = f.read()\n",
    "\n",
    "messages.append({\n",
    "    \"role\": \"user\",\n",
    "    \"content\": f\"Quick check - any issues here?\\n\\n```python\\n{scraper_code}\\n```\"\n",
    "})\n",
    "\n",
    "print(\"📝 Review 3: Web scraper (should trigger clearing)\")\n",
    "response = run_conversation_turn(\n",
    "    client=client,\n",
    "    model=MODEL,\n",
    "    messages=messages,\n",
    "    memory_handler=memory,\n",
    "    system=\"You are a code reviewer.\",\n",
    "    context_management=CONTEXT_MANAGEMENT,\n",
    "    thinking=THINKING,\n",
    "    max_tokens=4096,\n",
    "    verbose=True\n",
    ")\n",
    "\n",
    "messages.append({\"role\": \"assistant\", \"content\": response[1]})\n",
    "if response[2]:\n",
    "    messages.append({\"role\": \"user\", \"content\": response[2]})\n",
    "\n",
    "print(f\"  📊 Input tokens: {response[0].usage.input_tokens:,}\")\n",
    "context_cleared, saved = print_context_management_info(response[0])\n",
    "\n",
    "print()\n",
    "print(\"=\" * 60)\n",
    "print(\"✅ Session 3 complete!\")\n",
    "print(\"=\" * 60)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**What just happened?**\n",
    "\n",
    "As context grew during multiple reviews with extended thinking enabled, context editing was applied:\n",
    "1. **Thinking blocks cleared** - Old thinking from previous turns removed first\n",
    "2. **Tool results cleared** - Old memory tool results removed when threshold exceeded\n",
    "3. **Memory files intact** - Claude can still query learned patterns\n",
    "4. **Token usage managed** - Saved tokens from both thinking and tool results\n",
    "\n",
    "This demonstrates the key benefit:\n",
    "- **Short-term memory** (conversation context + thinking) → Cleared to save space\n",
    "- **Long-term memory** (stored patterns) → Persists across sessions\n",
    "\n",
    "Let's verify memory survived the clearing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "📂 Memory files in demo_memory/:\n",
      "\n",
      "demo_memory/\n",
      "  memories/\n",
      "    ├── review.md (318 bytes)\n",
      "\n",
      "✅ All learned patterns preserved despite context clearing!\n"
     ]
    }
   ],
   "source": [
    "# Verify memory persists after context clearing\n",
    "import os\n",
    "\n",
    "print(\"📂 Memory files in demo_memory/:\")\n",
    "print()\n",
    "\n",
    "for root, dirs, files in os.walk(\"./demo_memory\"):\n",
    "    # Calculate relative path for display\n",
    "    level = root.replace(\"./demo_memory\", \"\").count(os.sep)\n",
    "    indent = \"  \" * level\n",
    "    folder_name = os.path.basename(root) or \"demo_memory\"\n",
    "    print(f\"{indent}{folder_name}/\")\n",
    "\n",
    "    sub_indent = \"  \" * (level + 1)\n",
    "    for file in files:\n",
    "        file_path = os.path.join(root, file)\n",
    "        size = os.path.getsize(file_path)\n",
    "        print(f\"{sub_indent}├── {file} ({size} bytes)\")\n",
    "\n",
    "print()\n",
    "print(\"✅ All learned patterns preserved despite context clearing!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. How It Works {#how-it-works}\n",
    "\n",
    "### Memory Tool Architecture\n",
    "\n",
    "The memory tool is **client-side** - you control the storage. Claude makes tool calls, your application executes them.\n",
    "\n",
    "#### Memory Tool Commands\n",
    "\n",
    "| Command | Description | Example |\n",
    "|---------|-------------|---------|\n",
    "| `view` | Show directory or file contents | `{\"command\": \"view\", \"path\": \"/memories\"}` |\n",
    "| `create` | Create or overwrite a file | `{\"command\": \"create\", \"path\": \"/memories/notes.md\", \"file_text\": \"...\"}` |\n",
    "| `str_replace` | Replace text in a file | `{\"command\": \"str_replace\", \"path\": \"...\", \"old_str\": \"...\", \"new_str\": \"...\"}` |\n",
    "| `insert` | Insert text at line number | `{\"command\": \"insert\", \"path\": \"...\", \"insert_line\": 2, \"insert_text\": \"...\"}` |\n",
    "| `delete` | Delete a file or directory | `{\"command\": \"delete\", \"path\": \"/memories/old.txt\"}` |\n",
    "| `rename` | Rename or move a file | `{\"command\": \"rename\", \"old_path\": \"...\", \"new_path\": \"...\"}` |\n",
    "\n",
    "See `memory_tool.py` for the complete implementation with path validation and security measures."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Thinking Management (`clear_thinking_20251015`)\n",
    "\n",
    "When using extended thinking, thinking blocks accumulate and consume tokens. The `clear_thinking` strategy manages these automatically.\n",
    "\n",
    "**Important**: This strategy requires `thinking` to be enabled in your API call.\n",
    "\n",
    "**API Call Pattern** (with extended thinking enabled):\n",
    "\n",
    "```python\n",
    "response = client.beta.messages.create(\n",
    "    betas=[\"context-management-2025-06-27\"],  # Required beta flag\n",
    "    model=\"claude-sonnet-4-5\",\n",
    "    messages=messages,\n",
    "    tools=[{\"type\": \"memory_20250818\", \"name\": \"memory\"}],\n",
    "    thinking={\"type\": \"enabled\", \"budget_tokens\": 10000},  # Enable thinking\n",
    "    context_management={  # Context editing config\n",
    "        \"edits\": [\n",
    "            {\n",
    "                \"type\": \"clear_thinking_20251015\",\n",
    "                \"keep\": {\"type\": \"thinking_turns\", \"value\": 1}  # Keep last turn only\n",
    "            },\n",
    "            {\n",
    "                \"type\": \"clear_tool_uses_20250919\",\n",
    "                \"trigger\": {\"type\": \"input_tokens\", \"value\": 35000},\n",
    "                \"keep\": {\"type\": \"tool_uses\", \"value\": 5}\n",
    "            }\n",
    "        ]\n",
    "    },\n",
    "    max_tokens=2048\n",
    ")\n",
    "```\n",
    "\n",
    "**Key points:**\n",
    "- `clear_thinking` must come **first** when combining strategies\n",
    "- Requires extended thinking to be enabled (`thinking={\"type\": \"enabled\", ...}`)\n",
    "- Use `\"keep\": \"all\"` to preserve all thinking blocks for maximum cache hits\n",
    "- Trigger is optional for thinking (clears based on `keep` value)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Understanding the Demo Code\n",
    "\n",
    "Key implementation details from `code_review_demo.py`:\n",
    "\n",
    "```python\n",
    "class CodeReviewAssistant:\n",
    "    def __init__(self, memory_storage_path=\"./memory_storage\"):\n",
    "        self.client = Anthropic()\n",
    "        self.memory_handler = MemoryToolHandler(base_path=memory_storage_path)\n",
    "        self.messages = []\n",
    "    \n",
    "    def review_code(self, code, filename, description=\"\"):\n",
    "        # 1. Add user message\n",
    "        self.messages.append({...})\n",
    "        \n",
    "        # 2. Conversation loop with tool execution\n",
    "        while True:\n",
    "            response = self.client.beta.messages.create(\n",
    "                model=MODEL,\n",
    "                system=self._create_system_prompt(),\n",
    "                messages=self.messages,\n",
    "                tools=[{\"type\": \"memory_20250818\", \"name\": \"memory\"}],\n",
    "                betas=[\"context-management-2025-06-27\"],\n",
    "                context_management=CONTEXT_MANAGEMENT\n",
    "            )\n",
    "            \n",
    "            # 3. Execute tool uses\n",
    "            tool_results = []\n",
    "            for content in response.content:\n",
    "                if content.type == \"tool_use\":\n",
    "                    result = self._execute_tool_use(content)\n",
    "                    tool_results.append({...})\n",
    "            \n",
    "            # 4. Continue if there are tool uses, otherwise done\n",
    "            if tool_results:\n",
    "                self.messages.append({\"role\": \"user\", \"content\": tool_results})\n",
    "            else:\n",
    "                break\n",
    "```\n",
    "\n",
    "**The key pattern**: Keep calling the API while there are tool uses, executing them and feeding results back."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What Claude Actually Learns\n",
    "\n",
    "This is what makes memory powerful - **semantic pattern recognition**, not just syntax:\n",
    "\n",
    "**Session 1: Thread-Based Web Scraper**\n",
    "\n",
    "```python\n",
    "# Bug: Race condition\n",
    "class WebScraper:\n",
    "    def __init__(self):\n",
    "        self.results = []  # Shared state!\n",
    "    \n",
    "    def scrape_urls(self, urls):\n",
    "        with ThreadPoolExecutor() as executor:\n",
    "            for future in as_completed(futures):\n",
    "                self.results.append(future.result())  # RACE!\n",
    "```\n",
    "\n",
    "**What Claude Stores in Memory** (example file: `/memories/concurrency_patterns/thread_safety.md`):\n",
    "\n",
    "When Claude encounters this pattern, it stores the following insights to its memory files:\n",
    "- **Symptom**: Inconsistent results in concurrent operations\n",
    "- **Cause**: Shared mutable state (lists/dicts) modified from multiple threads\n",
    "- **Solution**: Use locks, thread-safe data structures, or return results instead\n",
    "- **Red flags**: Instance variables in thread callbacks, unused locks, counter increments\n",
    "\n",
    "---\n",
    "\n",
    "**Session 2: Async API Client** (New conversation!)\n",
    "\n",
    "Claude checks memory FIRST, finds the thread-safety pattern, then:\n",
    "1. **Recognizes** similar pattern in async code (coroutines can interleave too)\n",
    "2. **Applies** the solution immediately (no re-learning needed)\n",
    "3. **Explains** with reference to stored knowledge\n",
    "\n",
    "```python\n",
    "# Claude spots this immediately:\n",
    "async def fetch_all(self, endpoints):\n",
    "    for coro in asyncio.as_completed(tasks):\n",
    "        self.responses.append(await coro)  # Same pattern!\n",
    "```\n",
    "\n",
    "---\n",
    "\n",
    "**Why This Matters:**\n",
    "\n",
    "- ❌ **Syntax checkers** miss race conditions entirely\n",
    "- ✅ **Claude learns** architectural patterns and applies them across contexts\n",
    "- ✅ **Cross-language**: Pattern applies to Go, Java, Rust concurrency too\n",
    "- ✅ **Gets better**: Each review adds to the knowledge base"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Sample Code Files\n",
    "\n",
    "The demo uses these sample files (all have concurrency/thread-safety bugs):\n",
    "\n",
    "- `memory_demo/sample_code/web_scraper_v1.py` - Race condition: threads modifying shared state\n",
    "- `memory_demo/sample_code/api_client_v1.py` - Similar concurrency bug in async context\n",
    "- `memory_demo/sample_code/data_processor_v1.py` - Multiple concurrency issues for long session demo\n",
    "\n",
    "Let's look at one:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**`memory_demo/sample_code/web_scraper_v1.py`**\n",
    "\n",
    "```python\n",
    "\"\"\"\n",
    "Concurrent web scraper with a race condition bug.\n",
    "Multiple threads modify shared state without synchronization.\n",
    "\"\"\"\n",
    "\n",
    "import time\n",
    "from concurrent.futures import ThreadPoolExecutor, as_completed\n",
    "from typing import List, Dict\n",
    "\n",
    "import requests\n",
    "\n",
    "\n",
    "class WebScraper:\n",
    "    \"\"\"Web scraper that fetches multiple URLs concurrently.\"\"\"\n",
    "\n",
    "    def __init__(self, max_workers: int = 10):\n",
    "        self.max_workers = max_workers\n",
    "        self.results = []  # BUG: Shared mutable state accessed by multiple threads!\n",
    "        self.failed_urls = []  # BUG: Another race condition!\n",
    "\n",
    "    def fetch_url(self, url: str) -> Dict[str, any]:\n",
    "        \"\"\"Fetch a single URL and return the result.\"\"\"\n",
    "        try:\n",
    "            response = requests.get(url, timeout=5)\n",
    "            response.raise_for_status()\n",
    "            return {\n",
    "                \"url\": url,\n",
    "                \"status\": response.status_code,\n",
    "                \"content_length\": len(response.content),\n",
    "            }\n",
    "        except requests.exceptions.RequestException as e:\n",
    "            return {\"url\": url, \"error\": str(e)}\n",
    "\n",
    "    def scrape_urls(self, urls: List[str]) -> List[Dict[str, any]]:\n",
    "        \"\"\"\n",
    "        Scrape multiple URLs concurrently.\n",
    "\n",
    "        BUG: self.results is accessed from multiple threads without locking!\n",
    "        This causes race conditions where results can be lost or corrupted.\n",
    "        \"\"\"\n",
    "        with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n",
    "            futures = [executor.submit(self.fetch_url, url) for url in urls]\n",
    "\n",
    "            for future in as_completed(futures):\n",
    "                result = future.result()\n",
    "\n",
    "                # RACE CONDITION: Multiple threads append to self.results simultaneously\n",
    "                if \"error\" in result:\n",
    "                    self.failed_urls.append(result[\"url\"])  # RACE CONDITION\n",
    "                else:\n",
    "                    self.results.append(result)  # RACE CONDITION\n",
    "\n",
    "        return self.results\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Bug**: Multiple threads modify `self.results` and `self.failed_urls` without locking!\n",
    "\n",
    "Claude will:\n",
    "1. Identify the race conditions\n",
    "2. Store the pattern in `/memories/concurrency_patterns/thread_safety.md`\n",
    "3. Apply this concurrency pattern to async code in Session 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Demo Overview\n",
    "\n",
    "We've built a complete Code Review Assistant. The implementation is in `memory_demo/code_review_demo.py`.\n",
    "\n",
    "**To run the interactive demo:**\n",
    "```bash\n",
    "python memory_demo/code_review_demo.py\n",
    "```\n",
    "\n",
    "The demo demonstrates:\n",
    "1. **Session 1**: Review Python code with a bug → Claude learns the pattern\n",
    "2. **Session 2**: Review similar code (new conversation) → Claude applies the pattern\n",
    "3. **Session 3**: Long review session → Context editing keeps it manageable"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. Best Practices & Security {#best-practices}\n",
    "\n",
    "### Memory Management\n",
    "\n",
    "**Do:**\n",
    "- ✅ Store task-relevant patterns, not conversation history\n",
    "- ✅ Organize with clear directory structure\n",
    "- ✅ Use descriptive file names\n",
    "- ✅ Periodically review and clean up memory\n",
    "\n",
    "**Don't:**\n",
    "- ❌ Store sensitive information (passwords, API keys, PII)\n",
    "- ❌ Let memory grow unbounded\n",
    "- ❌ Store everything indiscriminately\n",
    "\n",
    "### Security: Path Traversal Protection\n",
    "\n",
    "**Critical**: Always validate paths to prevent directory traversal attacks. See `memory_tool.py` for implementation.\n",
    "\n",
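    "The core check can be sketched in a few lines. This illustrates the idea rather than reproducing `memory_tool.py`; the function name and root directory are assumptions:\n",
    "\n",
    "```python\n",
    "from pathlib import Path\n",
    "\n",
    "MEMORY_ROOT = Path(\"/tmp/memories\")  # assumed storage root, for illustration only\n",
    "\n",
    "def validate_memory_path(user_path: str) -> Path:\n",
    "    \"\"\"Resolve a model-supplied /memories path; reject anything escaping the root.\"\"\"\n",
    "    relative = user_path.removeprefix(\"/memories\").lstrip(\"/\")\n",
    "    # resolve() expands \"..\" components, so traversal attempts land\n",
    "    # outside MEMORY_ROOT and fail the containment check below\n",
    "    candidate = (MEMORY_ROOT / relative).resolve()\n",
    "    if not candidate.is_relative_to(MEMORY_ROOT.resolve()):\n",
    "        raise ValueError(f\"Path escapes memory directory: {user_path}\")\n",
    "    return candidate\n",
    "```\n",
    "\n",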
    "### Security: Memory Poisoning\n",
    "\n",
    "**⚠️ Critical Risk**: Memory files are read back into Claude's context, making them a potential vector for prompt injection.\n",
    "\n",
    "**Mitigation strategies:**\n",
    "1. **Content Sanitization**: Filter dangerous patterns before storing\n",
    "2. **Memory Scope Isolation**: Per-user/per-project isolation  \n",
    "3. **Memory Auditing**: Log and scan all memory operations\n",
    "4. **Prompt Engineering**: Instruct Claude to ignore instructions in memory\n",
    "\n",
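    "A first-pass sanitizer for strategy 1 can be sketched with a small deny-list. The patterns below are illustrative, not exhaustive, and a production filter needs broader coverage:\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "# Illustrative deny-list of common injection phrasings (assumption, not complete)\n",
    "SUSPICIOUS_PATTERNS = [\n",
    "    r\"(?i)ignore (all )?previous instructions\",\n",
    "    r\"(?i)you are now\",\n",
    "    r\"(?i)system prompt\",\n",
    "]\n",
    "\n",
    "def sanitize_memory_content(text: str) -> str:\n",
    "    \"\"\"Redact likely injection phrases before writing to a memory file.\"\"\"\n",
    "    for pattern in SUSPICIOUS_PATTERNS:\n",
    "        text = re.sub(pattern, \"[REDACTED]\", text)\n",
    "    return text\n",
    "```\n",
    "\n",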
    "See `memory_tool.py` for complete security implementation and tests in `tests/`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Conclusion\n",
    "\n",
    "### What You Accomplished\n",
    "\n",
    "In this cookbook, you learned to:\n",
    "- ✅ **Implement the memory tool** for cross-conversation learning (Sessions 1 & 2 showed pattern recognition persisting)\n",
    "- ✅ **Configure context editing** with token triggers and retention policies (Session 3 demonstrated automatic clearing)\n",
    "- ✅ **Apply security best practices** including path validation and memory poisoning prevention\n",
    "\n",
    "### Applying These Patterns\n",
    "\n",
    "**For your projects:**\n",
    "1. Start with a single memory file for patterns (e.g., `/memories/patterns.md`)\n",
    "2. Set context editing triggers at 30-40k tokens for production use\n",
    "3. Implement per-project memory isolation to prevent cross-contamination\n",
    "\n",
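    "A trigger in that range can be expressed as a request-level configuration. The field and type names below follow the context editing beta at the time of writing; treat them as assumptions and check the current API docs before use:\n",
    "\n",
    "```python\n",
    "# Sketch: clear older tool results once the conversation passes ~30k input tokens,\n",
    "# keeping the three most recent tool uses intact\n",
    "context_management = {\n",
    "    \"edits\": [{\n",
    "        \"type\": \"clear_tool_uses_20250919\",\n",
    "        \"trigger\": {\"type\": \"input_tokens\", \"value\": 30000},\n",
    "        \"keep\": {\"type\": \"tool_uses\", \"value\": 3},\n",
    "    }]\n",
    "}\n",
    "```\n",
    "\n",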
    "**Other applications:**\n",
    "- **Customer support**: Store user preferences and common issue resolutions\n",
    "- **Research assistants**: Accumulate domain knowledge across sessions\n",
    "- **Data analysis**: Remember dataset characteristics and successful techniques\n",
    "\n",
    "### Next Steps\n",
    "\n",
    "- **Production deployment**: Use [claude-code-action](https://github.com/anthropics/claude-code-action) for GitHub PR reviews\n",
    "- **Security hardening**: Review the memory poisoning mitigations in `memory_tool.py`\n",
    "- **Extended thinking**: Explore thinking management for compute-intensive tasks\n",
    "\n",
    "### Resources\n",
    "\n",
    "- [Memory tool documentation](https://docs.claude.com/en/docs/agents-and-tools/tool-use/memory-tool)\n",
    "- [Claude API reference](https://docs.claude.com/en/api/messages)\n",
    "- [Support](https://support.claude.com)\n",
    "\n",
    "Memory and context management are in **beta**. Share your feedback to help us improve!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}