{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 🚀 Code Translator from Python to C++\n",
        "\n",
        "**Multi-LLM Python to C++ Code Translator with Compilation Testing and Quality Analysis**\n",
        "\n",
        "This notebook demonstrates a comprehensive AI-powered code translation system that:\n",
        "- Translates Python code to C++ using multiple LLM models (GPT-4o, Claude 3.5 Sonnet, Gemini 2.0 Flash)\n",
        "- Automatically compiles and tests generated C++ code\n",
        "- Performs quality analysis and performance benchmarking\n",
        "- Compares translation results across different AI models\n",
        "\n",
        "## 🎯 Key Features\n",
        "\n",
        "- **Multi-LLM Support**: Compare translations from OpenAI, Anthropic, and Google\n",
        "- **C++ Compilation**: Automatic compilation and execution testing\n",
        "- **Quality Analysis**: Code quality metrics and performance benchmarking\n",
        "- **Interactive Interface**: Easy-to-use notebook interface\n",
        "- **Comprehensive Testing**: Full test suite for validation\n",
        "\n",
        "## 📋 Table of Contents\n",
        "\n",
        "1. [Setup and Installation](#setup)\n",
        "2. [LLM Client Implementation](#llm-clients)\n",
        "3. [C++ Compiler and Testing](#compiler)\n",
        "4. [Core Translation Logic](#translator)\n",
        "5. [Quality Analysis](#quality)\n",
        "6. [Interactive Examples](#examples)\n",
        "7. [Performance Benchmarking](#benchmarking)\n",
        "8. [Testing and Validation](#testing)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. Setup and Installation\n",
        "\n",
        "First, let's install the required dependencies and set up the environment.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Install required packages\n",
        "!uv add openai anthropic google-generativeai gradio python-dotenv pydantic requests psutil memory-profiler pytest black flake8 mypy\n",
        "# If you use pip instead of uv, run:\n",
        "#!pip install openai anthropic google-generativeai gradio python-dotenv pydantic requests psutil memory-profiler pytest black flake8 mypy\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import required libraries\n",
        "import os\n",
        "import sys\n",
        "import json\n",
        "import time\n",
        "import subprocess\n",
        "import tempfile\n",
        "import psutil\n",
        "import re\n",
        "from typing import Dict, List, Optional, Tuple, Any, Union\n",
        "from dataclasses import dataclass, asdict\n",
        "from pathlib import Path\n",
        "\n",
        "# LLM libraries\n",
        "import openai\n",
        "import anthropic\n",
        "import google.generativeai as genai\n",
        "from dotenv import load_dotenv\n",
        "\n",
        "# Load environment variables\n",
        "load_dotenv()\n",
        "\n",
        "print(\"✅ All libraries imported successfully!\")\n",
        "print(f\"Python version: {sys.version}\")\n",
        "print(f\"Working directory: {os.getcwd()}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. LLM Client Implementation\n",
        "\n",
        "Let's implement the LLM clients for OpenAI GPT, Anthropic Claude, and Google Gemini.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Data classes for translation results\n",
        "@dataclass\n",
        "class TranslationResult:\n",
        "    \"\"\"Result of a code translation.\"\"\"\n",
        "    source_code: str\n",
        "    translated_code: str\n",
        "    model_name: str\n",
        "    success: bool\n",
        "    error_message: Optional[str] = None\n",
        "    translation_time: float = 0.0\n",
        "    token_usage: Optional[Dict] = None\n",
        "\n",
        "@dataclass\n",
        "class CompilationResult:\n",
        "    \"\"\"Result of C++ compilation.\"\"\"\n",
        "    success: bool\n",
        "    executable_path: Optional[str] = None\n",
        "    error_message: Optional[str] = None\n",
        "    compilation_time: float = 0.0\n",
        "    warnings: Optional[List[str]] = None\n",
        "\n",
        "@dataclass\n",
        "class ExecutionResult:\n",
        "    \"\"\"Result of C++ code execution.\"\"\"\n",
        "    success: bool\n",
        "    output: str = \"\"\n",
        "    error_message: Optional[str] = None\n",
        "    execution_time: float = 0.0\n",
        "    memory_usage: float = 0.0\n",
        "    exit_code: int = 0\n",
        "\n",
        "@dataclass\n",
        "class PerformanceMetrics:\n",
        "    \"\"\"Performance metrics for C++ code.\"\"\"\n",
        "    execution_time: float\n",
        "    memory_usage: float\n",
        "    cpu_usage: float\n",
        "    code_size: int\n",
        "    compilation_time: float\n",
        "\n",
        "print(\"✅ Data classes defined successfully!\")\n"
      ]
    },
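    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before wiring up the LLM clients, here is a quick sanity check of the data classes. This is an illustrative snippet only; the `_demo` instance is not used elsewhere in the notebook.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sanity check: build a TranslationResult and round-trip it through asdict/json.\n",
        "# Illustrative only; not part of the translator pipeline.\n",
        "_demo = TranslationResult(\n",
        "    source_code=\"print('hi')\",\n",
        "    translated_code=\"// ...\",\n",
        "    model_name=\"demo\",\n",
        "    success=True,\n",
        "    translation_time=0.01\n",
        ")\n",
        "print(json.dumps(asdict(_demo), indent=2))\n"
      ]
    },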
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# OpenAI GPT Client\n",
        "class OpenAIClient:\n",
        "    \"\"\"OpenAI GPT client for code translation.\"\"\"\n",
        "    \n",
        "    def __init__(self, api_key: str):\n",
        "        self.api_key = api_key\n",
        "        self.client = openai.OpenAI(api_key=api_key)\n",
        "    \n",
        "    def translate_python_to_cpp(self, python_code: str, context: str = \"\") -> TranslationResult:\n",
        "        \"\"\"Translate Python code to C++ using GPT-4o.\"\"\"\n",
        "        start_time = time.time()\n",
        "        \n",
        "        try:\n",
        "            system_prompt = \"\"\"You are an expert Python to C++ translator. \n",
        "            Convert the given Python code to efficient, modern C++ code.\n",
        "            \n",
        "            Requirements:\n",
        "            - Use modern C++17/20 features\n",
        "            - Include proper headers\n",
        "            - Add comprehensive error handling\n",
        "            - Optimize for performance\n",
        "            - Include detailed comments\n",
        "            - Follow C++ best practices\n",
        "            \n",
        "            Return ONLY the C++ code, no explanations.\"\"\"\n",
        "            \n",
        "            user_prompt = f\"\"\"Translate this Python code to C++:\n",
        "\n",
        "Context: {context}\n",
        "\n",
        "Python Code:\n",
        "```python\n",
        "{python_code}\n",
        "```\n",
        "\n",
        "C++ Translation:\"\"\"\n",
        "            \n",
        "            response = self.client.chat.completions.create(\n",
        "                model=\"gpt-4o\",\n",
        "                messages=[\n",
        "                    {\"role\": \"system\", \"content\": system_prompt},\n",
        "                    {\"role\": \"user\", \"content\": user_prompt}\n",
        "                ],\n",
        "                temperature=0.1,\n",
        "                max_tokens=4000\n",
        "            )\n",
        "            \n",
        "            translated_code = response.choices[0].message.content.strip()\n",
        "            translation_time = time.time() - start_time\n",
        "            \n",
        "            return TranslationResult(\n",
        "                source_code=python_code,\n",
        "                translated_code=translated_code,\n",
        "                model_name=\"GPT-4o\",\n",
        "                success=True,\n",
        "                translation_time=translation_time,\n",
        "                token_usage={\n",
        "                    \"prompt_tokens\": response.usage.prompt_tokens,\n",
        "                    \"completion_tokens\": response.usage.completion_tokens,\n",
        "                    \"total_tokens\": response.usage.total_tokens\n",
        "                }\n",
        "            )\n",
        "            \n",
        "        except Exception as e:\n",
        "            return TranslationResult(\n",
        "                source_code=python_code,\n",
        "                translated_code=\"\",\n",
        "                model_name=\"GPT-4o\",\n",
        "                success=False,\n",
        "                error_message=str(e),\n",
        "                translation_time=time.time() - start_time\n",
        "            )\n",
        "\n",
        "print(\"✅ OpenAI client implemented!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Anthropic Claude Client\n",
        "class ClaudeClient:\n",
        "    \"\"\"Anthropic Claude client for code translation.\"\"\"\n",
        "    \n",
        "    def __init__(self, api_key: str):\n",
        "        self.api_key = api_key\n",
        "        self.client = anthropic.Anthropic(api_key=api_key)\n",
        "    \n",
        "    def translate_python_to_cpp(self, python_code: str, context: str = \"\") -> TranslationResult:\n",
        "        \"\"\"Translate Python code to C++ using Claude 3.5 Sonnet.\"\"\"\n",
        "        start_time = time.time()\n",
        "        \n",
        "        try:\n",
        "            system_prompt = \"\"\"You are an expert Python to C++ translator. \n",
        "            Convert the given Python code to efficient, modern C++ code.\n",
        "            \n",
        "            Requirements:\n",
        "            - Use modern C++17/20 features\n",
        "            - Include proper headers\n",
        "            - Add comprehensive error handling\n",
        "            - Optimize for performance\n",
        "            - Include detailed comments\n",
        "            - Follow C++ best practices\n",
        "            \n",
        "            Return ONLY the C++ code, no explanations.\"\"\"\n",
        "            \n",
        "            user_prompt = f\"\"\"Translate this Python code to C++:\n",
        "\n",
        "Context: {context}\n",
        "\n",
        "Python Code:\n",
        "```python\n",
        "{python_code}\n",
        "```\n",
        "\n",
        "C++ Translation:\"\"\"\n",
        "            \n",
        "            response = self.client.messages.create(\n",
        "                model=\"claude-3-5-sonnet-20241022\",\n",
        "                max_tokens=4000,\n",
        "                temperature=0.1,\n",
        "                system=system_prompt,\n",
        "                messages=[\n",
        "                    {\"role\": \"user\", \"content\": user_prompt}\n",
        "                ]\n",
        "            )\n",
        "            \n",
        "            translated_code = response.content[0].text.strip()\n",
        "            translation_time = time.time() - start_time\n",
        "            \n",
        "            return TranslationResult(\n",
        "                source_code=python_code,\n",
        "                translated_code=translated_code,\n",
        "                model_name=\"Claude-3.5-Sonnet\",\n",
        "                success=True,\n",
        "                translation_time=translation_time,\n",
        "                token_usage={\n",
        "                    \"input_tokens\": response.usage.input_tokens,\n",
        "                    \"output_tokens\": response.usage.output_tokens\n",
        "                }\n",
        "            )\n",
        "            \n",
        "        except Exception as e:\n",
        "            return TranslationResult(\n",
        "                source_code=python_code,\n",
        "                translated_code=\"\",\n",
        "                model_name=\"Claude-3.5-Sonnet\",\n",
        "                success=False,\n",
        "                error_message=str(e),\n",
        "                translation_time=time.time() - start_time\n",
        "            )\n",
        "\n",
        "print(\"✅ Claude client implemented!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Google Gemini Client\n",
        "class GeminiClient:\n",
        "    \"\"\"Google Gemini client for code translation.\"\"\"\n",
        "    \n",
        "    def __init__(self, api_key: str):\n",
        "        self.api_key = api_key\n",
        "        genai.configure(api_key=api_key)\n",
        "        self.client = genai.GenerativeModel('gemini-2.0-flash-exp')\n",
        "    \n",
        "    def translate_python_to_cpp(self, python_code: str, context: str = \"\") -> TranslationResult:\n",
        "        \"\"\"Translate Python code to C++ using Gemini 2.0 Flash.\"\"\"\n",
        "        start_time = time.time()\n",
        "        \n",
        "        try:\n",
        "            prompt = f\"\"\"You are an expert Python to C++ translator. \n",
        "            Convert the given Python code to efficient, modern C++ code.\n",
        "            \n",
        "            Requirements:\n",
        "            - Use modern C++17/20 features\n",
        "            - Include proper headers\n",
        "            - Add comprehensive error handling\n",
        "            - Optimize for performance\n",
        "            - Include detailed comments\n",
        "            - Follow C++ best practices\n",
        "            \n",
        "            Context: {context}\n",
        "            \n",
        "            Python Code:\n",
        "            ```python\n",
        "            {python_code}\n",
        "            ```\n",
        "            \n",
        "            Return ONLY the C++ code, no explanations.\"\"\"\n",
        "            \n",
        "            response = self.client.generate_content(\n",
        "                prompt,\n",
        "                generation_config=genai.types.GenerationConfig(\n",
        "                    temperature=0.1,\n",
        "                    max_output_tokens=4000\n",
        "                )\n",
        "            )\n",
        "            \n",
        "            translated_code = response.text.strip()\n",
        "            translation_time = time.time() - start_time\n",
        "            \n",
        "            return TranslationResult(\n",
        "                source_code=python_code,\n",
        "                translated_code=translated_code,\n",
        "                model_name=\"Gemini-2.0-Flash\",\n",
        "                success=True,\n",
        "                translation_time=translation_time\n",
        "            )\n",
        "            \n",
        "        except Exception as e:\n",
        "            return TranslationResult(\n",
        "                source_code=python_code,\n",
        "                translated_code=\"\",\n",
        "                model_name=\"Gemini-2.0-Flash\",\n",
        "                success=False,\n",
        "                error_message=str(e),\n",
        "                translation_time=time.time() - start_time\n",
        "            )\n",
        "\n",
        "print(\"✅ Gemini client implemented!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# LLM Client Manager\n",
        "class LLMClientManager:\n",
        "    \"\"\"Manages multiple LLM clients for code translation.\"\"\"\n",
        "    \n",
        "    def __init__(self):\n",
        "        self.clients = {}\n",
        "        self._initialize_clients()\n",
        "    \n",
        "    def _initialize_clients(self):\n",
        "        \"\"\"Initialize available LLM clients.\"\"\"\n",
        "        # OpenAI\n",
        "        openai_key = os.getenv('OPENAI_API_KEY')\n",
        "        if openai_key:\n",
        "            self.clients['gpt'] = OpenAIClient(openai_key)\n",
        "        \n",
        "        # Anthropic Claude\n",
        "        claude_key = os.getenv('ANTHROPIC_API_KEY')\n",
        "        if claude_key:\n",
        "            self.clients['claude'] = ClaudeClient(claude_key)\n",
        "        \n",
        "        # Google Gemini\n",
        "        gemini_key = os.getenv('GOOGLE_API_KEY')\n",
        "        if gemini_key:\n",
        "            self.clients['gemini'] = GeminiClient(gemini_key)\n",
        "    \n",
        "    def get_available_models(self) -> List[str]:\n",
        "        \"\"\"Get list of available model names.\"\"\"\n",
        "        return list(self.clients.keys())\n",
        "    \n",
        "    def translate_with_all_models(self, python_code: str, context: str = \"\") -> Dict[str, TranslationResult]:\n",
        "        \"\"\"Translate code using all available models.\"\"\"\n",
        "        results = {}\n",
        "        \n",
        "        for model_name, client in self.clients.items():\n",
        "            try:\n",
        "                result = client.translate_python_to_cpp(python_code, context)\n",
        "                results[model_name] = result\n",
        "            except Exception as e:\n",
        "                results[model_name] = TranslationResult(\n",
        "                    source_code=python_code,\n",
        "                    translated_code=\"\",\n",
        "                    model_name=model_name,\n",
        "                    success=False,\n",
        "                    error_message=str(e)\n",
        "                )\n",
        "        \n",
        "        return results\n",
        "    \n",
        "    def translate_with_model(self, model_name: str, python_code: str, context: str = \"\") -> TranslationResult:\n",
        "        \"\"\"Translate code using a specific model.\"\"\"\n",
        "        if model_name not in self.clients:\n",
        "            raise ValueError(f\"Model {model_name} not available. Available models: {list(self.clients.keys())}\")\n",
        "        \n",
        "        return self.clients[model_name].translate_python_to_cpp(python_code, context)\n",
        "\n",
        "# Initialize LLM manager\n",
        "llm_manager = LLMClientManager()\n",
        "available_models = llm_manager.get_available_models()\n",
        "\n",
        "print(\"✅ LLM Client Manager initialized!\")\n",
        "print(f\"Available models: {available_models}\")\n",
        "\n",
        "if not available_models:\n",
        "    print(\"⚠️ No LLM models available. Please check your API keys:\")\n",
        "    print(\"  - OPENAI_API_KEY\")\n",
        "    print(\"  - ANTHROPIC_API_KEY\") \n",
        "    print(\"  - GOOGLE_API_KEY\")\n"
      ]
    },
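    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The manager can be exercised without API keys by registering a stand-in client. The `MockClient` below is a hypothetical helper for illustration only; it is not part of the translator itself.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Hypothetical mock client (illustration only) to exercise the manager\n",
        "# without real API keys. Any object with a translate_python_to_cpp method works.\n",
        "class MockClient:\n",
        "    def translate_python_to_cpp(self, python_code: str, context: str = \"\") -> TranslationResult:\n",
        "        return TranslationResult(\n",
        "            source_code=python_code,\n",
        "            translated_code=\"// mock translation\",\n",
        "            model_name=\"mock\",\n",
        "            success=True\n",
        "        )\n",
        "\n",
        "_mock_manager = LLMClientManager()\n",
        "_mock_manager.clients['mock'] = MockClient()\n",
        "print(_mock_manager.translate_with_model('mock', \"print(1)\").translated_code)\n"
      ]
    },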
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. C++ Compiler and Testing\n",
        "\n",
        "Now let's implement the C++ compilation and testing functionality.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# C++ Compiler Implementation\n",
        "class CppCompiler:\n",
        "    \"\"\"Handles C++ compilation and testing.\"\"\"\n",
        "    \n",
        "    def __init__(self, compiler_path: str = \"g++\", optimization_level: str = \"-O2\"):\n",
        "        self.compiler_path = compiler_path\n",
        "        self.optimization_level = optimization_level\n",
        "        self.temp_dir = None\n",
        "    \n",
        "    def __enter__(self):\n",
        "        \"\"\"Context manager entry.\"\"\"\n",
        "        self.temp_dir = tempfile.mkdtemp(prefix=\"cpp_translator_\")\n",
        "        return self\n",
        "    \n",
        "    def __exit__(self, exc_type, exc_val, exc_tb):\n",
        "        \"\"\"Context manager exit - cleanup temp files.\"\"\"\n",
        "        if self.temp_dir and os.path.exists(self.temp_dir):\n",
        "            import shutil\n",
        "            shutil.rmtree(self.temp_dir, ignore_errors=True)\n",
        "    \n",
        "    def _write_cpp_file(self, cpp_code: str, filename: str = \"main.cpp\") -> str:\n",
        "        \"\"\"Write C++ code to a temporary file.\"\"\"\n",
        "        if not self.temp_dir:\n",
        "            raise RuntimeError(\"Compiler not initialized. Use as context manager.\")\n",
        "        \n",
        "        file_path = os.path.join(self.temp_dir, filename)\n",
        "        with open(file_path, 'w', encoding='utf-8') as f:\n",
        "            f.write(cpp_code)\n",
        "        return file_path\n",
        "    \n",
        "    def _add_standard_headers(self, cpp_code: str) -> str:\n",
        "        \"\"\"Add standard C++ headers if not present.\"\"\"\n",
        "        if \"#include\" not in cpp_code:\n",
        "            headers = [\n",
        "                \"#include <iostream>\",\n",
        "                \"#include <vector>\",\n",
        "                \"#include <string>\",\n",
        "                \"#include <algorithm>\",\n",
        "                \"#include <memory>\",\n",
        "                \"#include <stdexcept>\",\n",
        "                \"#include <chrono>\",\n",
        "                \"#include <thread>\"\n",
        "            ]\n",
        "            cpp_code = \"\\n\".join(headers) + \"\\n\\n\" + cpp_code\n",
        "        \n",
        "        return cpp_code\n",
        "    \n",
        "    def _add_main_function_if_needed(self, cpp_code: str) -> str:\n",
        "        \"\"\"Add main function if not present.\"\"\"\n",
        "        if \"int main(\" not in cpp_code and \"void main(\" not in cpp_code:\n",
        "            main_code = \"\"\"\n",
        "int main() {\n",
        "    try {\n",
        "        // Your code will be executed here\n",
        "        return 0;\n",
        "    } catch (const std::exception& e) {\n",
        "        std::cerr << \"Error: \" << e.what() << std::endl;\n",
        "        return 1;\n",
        "    }\n",
        "}\"\"\"\n",
        "            cpp_code += main_code\n",
        "        \n",
        "        return cpp_code\n",
        "    \n",
        "    def compile_cpp(self, cpp_code: str, output_name: str = \"main\") -> CompilationResult:\n",
        "        \"\"\"Compile C++ code to executable.\"\"\"\n",
        "        start_time = time.time()\n",
        "        \n",
        "        try:\n",
        "            # Preprocess the code\n",
        "            cpp_code = self._add_standard_headers(cpp_code)\n",
        "            cpp_code = self._add_main_function_if_needed(cpp_code)\n",
        "            \n",
        "            # Write to temporary file\n",
        "            cpp_file = self._write_cpp_file(cpp_code)\n",
        "            exe_path = os.path.join(self.temp_dir, output_name)\n",
        "            \n",
        "            # Compilation command\n",
        "            cmd = [\n",
        "                self.compiler_path,\n",
        "                self.optimization_level,\n",
        "                \"-std=c++17\",\n",
        "                \"-Wall\",\n",
        "                \"-Wextra\",\n",
        "                cpp_file,\n",
        "                \"-o\", exe_path\n",
        "            ]\n",
        "            \n",
        "            # Compile\n",
        "            result = subprocess.run(\n",
        "                cmd,\n",
        "                capture_output=True,\n",
        "                text=True,\n",
        "                timeout=30\n",
        "            )\n",
        "            \n",
        "            compilation_time = time.time() - start_time\n",
        "            \n",
        "            if result.returncode == 0:\n",
        "                return CompilationResult(\n",
        "                    success=True,\n",
        "                    executable_path=exe_path,\n",
        "                    compilation_time=compilation_time,\n",
        "                    warnings=self._extract_warnings(result.stderr)\n",
        "                )\n",
        "            else:\n",
        "                return CompilationResult(\n",
        "                    success=False,\n",
        "                    error_message=result.stderr,\n",
        "                    compilation_time=compilation_time\n",
        "                )\n",
        "                \n",
        "        except subprocess.TimeoutExpired:\n",
        "            return CompilationResult(\n",
        "                success=False,\n",
        "                error_message=\"Compilation timeout\",\n",
        "                compilation_time=time.time() - start_time\n",
        "            )\n",
        "        except Exception as e:\n",
        "            return CompilationResult(\n",
        "                success=False,\n",
        "                error_message=str(e),\n",
        "                compilation_time=time.time() - start_time\n",
        "            )\n",
        "    \n",
        "    def _extract_warnings(self, stderr: str) -> List[str]:\n",
        "        \"\"\"Extract warnings from compiler output.\"\"\"\n",
        "        warnings = []\n",
        "        for line in stderr.split('\\n'):\n",
        "            if 'warning:' in line.lower():\n",
        "                warnings.append(line.strip())\n",
        "        return warnings\n",
        "    \n",
        "    def execute_cpp(self, executable_path: str, input_data: str = \"\", timeout: int = 10) -> ExecutionResult:\n",
        "        \"\"\"Execute compiled C++ code.\"\"\"\n",
        "        start_time = time.time()\n",
        "        \n",
        "        try:\n",
        "            # Start process\n",
        "            process = subprocess.Popen(\n",
        "                [executable_path],\n",
        "                stdin=subprocess.PIPE,\n",
        "                stdout=subprocess.PIPE,\n",
        "                stderr=subprocess.PIPE,\n",
        "                text=True\n",
        "            )\n",
        "            \n",
        "            # Snapshot memory usage shortly after launch (rough estimate;\n",
        "            # the process may not have allocated its peak memory yet)\n",
        "            memory_usage = 0.0\n",
        "            try:\n",
        "                ps_process = psutil.Process(process.pid)\n",
        "                memory_usage = ps_process.memory_info().rss / 1024 / 1024  # MB\n",
        "            except (psutil.NoSuchProcess, psutil.AccessDenied):\n",
        "                pass\n",
        "            \n",
        "            # Execute with timeout\n",
        "            stdout, stderr = process.communicate(input=input_data, timeout=timeout)\n",
        "            execution_time = time.time() - start_time\n",
        "            \n",
        "            return ExecutionResult(\n",
        "                success=process.returncode == 0,\n",
        "                output=stdout,\n",
        "                error_message=stderr if stderr else None,\n",
        "                execution_time=execution_time,\n",
        "                memory_usage=memory_usage,\n",
        "                exit_code=process.returncode\n",
        "            )\n",
        "            \n",
        "        except subprocess.TimeoutExpired:\n",
        "            process.kill()\n",
        "            process.communicate()  # Reap the killed process and close its pipes\n",
        "            return ExecutionResult(\n",
        "                success=False,\n",
        "                error_message=\"Execution timeout\",\n",
        "                execution_time=time.time() - start_time\n",
        "            )\n",
        "        except Exception as e:\n",
        "            return ExecutionResult(\n",
        "                success=False,\n",
        "                error_message=str(e),\n",
        "                execution_time=time.time() - start_time\n",
        "            )\n",
        "    \n",
        "    def compile_and_test(self, cpp_code: str, test_input: str = \"\") -> Tuple[CompilationResult, Optional[ExecutionResult]]:\n",
        "        \"\"\"Compile and test C++ code.\"\"\"\n",
        "        # Compile\n",
        "        compilation_result = self.compile_cpp(cpp_code)\n",
        "        \n",
        "        if not compilation_result.success:\n",
        "            return compilation_result, None\n",
        "        \n",
        "        # Execute\n",
        "        execution_result = self.execute_cpp(compilation_result.executable_path, test_input)\n",
        "        \n",
        "        return compilation_result, execution_result\n",
        "\n",
        "print(\"✅ C++ Compiler implemented!\")\n"
      ]
    },
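    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal end-to-end check of the compiler wrapper. This example assumes `g++` is installed and on `PATH`; skip the cell otherwise.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Minimal end-to-end check: compile and run a hello-world program.\n",
        "# Assumes g++ is installed and available on PATH.\n",
        "_hello_cpp = '''\n",
        "#include <iostream>\n",
        "int main() {\n",
        "    std::cout << \"Hello from C++\" << std::endl;\n",
        "    return 0;\n",
        "}\n",
        "'''\n",
        "with CppCompiler() as _compiler:\n",
        "    _comp, _exec = _compiler.compile_and_test(_hello_cpp)\n",
        "    if _comp.success and _exec and _exec.success:\n",
        "        print(_exec.output)\n",
        "    else:\n",
        "        print(\"Failed:\", _comp.error_message or (_exec.error_message if _exec else None))\n"
      ]
    },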
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Code Quality Analyzer\n",
        "class CodeQualityAnalyzer:\n",
        "    \"\"\"Analyzes code quality metrics.\"\"\"\n",
        "    \n",
        "    @staticmethod\n",
        "    def analyze_cpp_quality(cpp_code: str) -> Dict[str, Any]:\n",
        "        \"\"\"Analyze C++ code quality.\"\"\"\n",
        "        metrics = {\n",
        "            \"lines_of_code\": len(cpp_code.split('\\n')),\n",
        "            \"comment_ratio\": CodeQualityAnalyzer._calculate_comment_ratio(cpp_code),\n",
        "            \"function_count\": CodeQualityAnalyzer._count_functions(cpp_code),\n",
        "            \"class_count\": CodeQualityAnalyzer._count_classes(cpp_code),\n",
        "            \"complexity_score\": CodeQualityAnalyzer._calculate_complexity(cpp_code),\n",
        "            \"style_score\": CodeQualityAnalyzer._calculate_style_score(cpp_code),\n",
        "            \"error_handling\": CodeQualityAnalyzer._check_error_handling(cpp_code),\n",
        "            \"modern_cpp_features\": CodeQualityAnalyzer._check_modern_features(cpp_code)\n",
        "        }\n",
        "        \n",
        "        return metrics\n",
        "    \n",
        "    @staticmethod\n",
        "    def _calculate_comment_ratio(cpp_code: str) -> float:\n",
        "        \"\"\"Calculate ratio of commented lines.\"\"\"\n",
        "        lines = cpp_code.split('\\n')\n",
        "        comment_lines = sum(1 for line in lines if line.strip().startswith('//') or line.strip().startswith('/*'))\n",
        "        return comment_lines / len(lines) if lines else 0.0\n",
        "    \n",
        "    @staticmethod\n",
        "    def _count_functions(cpp_code: str) -> int:\n",
        "        \"\"\"Count function definitions (heuristic regex; may miss some signatures).\"\"\"\n",
        "        pattern = r'\\w+\\s+\\w+\\s*\\([^)]*\\)\\s*\\{'\n",
        "        return len(re.findall(pattern, cpp_code))\n",
        "    \n",
        "    @staticmethod\n",
        "    def _count_classes(cpp_code: str) -> int:\n",
        "        \"\"\"Count class definitions.\"\"\"\n",
        "        pattern = r'class\\s+\\w+'\n",
        "        return len(re.findall(pattern, cpp_code))\n",
        "    \n",
        "    @staticmethod\n",
        "    def _calculate_complexity(cpp_code: str) -> int:\n",
        "        \"\"\"Approximate cyclomatic complexity from branch keywords and operators.\"\"\"\n",
        "        complexity = 1  # Base complexity\n",
        "        # Use word boundaries so e.g. 'for' is not counted inside 'format'\n",
        "        complexity += len(re.findall(r'\\b(if|else|while|for|switch|case|catch)\\b', cpp_code))\n",
        "        complexity += cpp_code.count('&&') + cpp_code.count('||')\n",
        "        return complexity\n",
        "    \n",
        "    @staticmethod\n",
        "    def _calculate_style_score(cpp_code: str) -> float:\n",
        "        \"\"\"Calculate style score based on various factors.\"\"\"\n",
        "        score = 0.0\n",
        "        lines = cpp_code.split('\\n')\n",
        "        \n",
        "        # Check for consistent indentation (no mixing of tabs and spaces);\n",
        "        # requiring every line to be indented would fail any real C++ file,\n",
        "        # since #include directives and top-level definitions start at column 0\n",
        "        uses_tabs = any(line.startswith('\\t') for line in lines)\n",
        "        uses_spaces = any(line.startswith(' ') for line in lines)\n",
        "        if not (uses_tabs and uses_spaces):\n",
        "            score += 0.2\n",
        "        \n",
        "        # Check call style: name immediately followed by '(' (e.g. foo(x), not foo (x))\n",
        "        if re.search(r'\\w\\(\\w', cpp_code):\n",
        "            score += 0.2\n",
        "        \n",
        "        # Check for const correctness\n",
        "        if 'const' in cpp_code:\n",
        "            score += 0.2\n",
        "        \n",
        "        # Check for RAII usage\n",
        "        if 'std::unique_ptr' in cpp_code or 'std::shared_ptr' in cpp_code:\n",
        "            score += 0.2\n",
        "        \n",
        "        # Check for proper includes\n",
        "        if '#include' in cpp_code:\n",
        "            score += 0.2\n",
        "        \n",
        "        return min(score, 1.0)\n",
        "    \n",
        "    @staticmethod\n",
        "    def _check_error_handling(cpp_code: str) -> bool:\n",
        "        \"\"\"Check if code has proper error handling.\"\"\"\n",
        "        return 'try' in cpp_code and 'catch' in cpp_code\n",
        "    \n",
        "    @staticmethod\n",
        "    def _check_modern_features(cpp_code: str) -> List[str]:\n",
        "        \"\"\"Check for modern C++ features.\"\"\"\n",
        "        features = []\n",
        "        \n",
        "        if 'auto' in cpp_code:\n",
        "            features.append('auto')\n",
        "        if 'std::unique_ptr' in cpp_code:\n",
        "            features.append('smart_pointers')\n",
        "        if 'std::vector' in cpp_code:\n",
        "            features.append('stl_containers')\n",
        "        # C++ has no 'lambda' keyword; look for a capture list followed by a parameter list\n",
        "        if re.search(r'\\[[^\\]]*\\]\\s*\\(', cpp_code):\n",
        "            features.append('lambdas')\n",
        "        if 'std::thread' in cpp_code:\n",
        "            features.append('threading')\n",
        "        \n",
        "        return features\n",
        "\n",
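        "# Illustrative usage (hypothetical snippet, not model output): run the\n",
        "# analyzer on a tiny C++ fragment to preview the metrics dictionary.\n",
        "_sample_cpp = '''\n",
        "#include <vector>\n",
        "// Sum a vector of ints\n",
        "int sum(const std::vector<int>& v) {\n",
        "    int total = 0;\n",
        "    for (auto x : v) { total += x; }\n",
        "    return total;\n",
        "}\n",
        "'''\n",
        "print(CodeQualityAnalyzer.analyze_cpp_quality(_sample_cpp))\n",
        "\n",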
        "print(\"✅ Code Quality Analyzer implemented!\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. Core Translation Logic\n",
        "\n",
        "Now let's implement the main translation logic that coordinates all components.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Translation Comparison Data Class\n",
        "@dataclass\n",
        "class TranslationComparison:\n",
        "    \"\"\"Comparison of translations across different models.\"\"\"\n",
        "    model_results: Dict[str, TranslationResult]\n",
        "    compilation_results: Dict[str, CompilationResult]\n",
        "    execution_results: Dict[str, ExecutionResult]\n",
        "    performance_metrics: Dict[str, PerformanceMetrics]\n",
        "    quality_scores: Dict[str, Dict[str, Any]]\n",
        "    best_model: Optional[str] = None\n",
        "    comparison_summary: Optional[str] = None\n",
        "\n",
        "# Main Code Translator\n",
        "class CodeTranslator:\n",
        "    \"\"\"Main translator class that coordinates the entire translation process.\"\"\"\n",
        "    \n",
        "    def __init__(self):\n",
        "        self.llm_manager = LLMClientManager()\n",
        "        self.available_models = self.llm_manager.get_available_models()\n",
        "        \n",
        "        if not self.available_models:\n",
        "            print(\"⚠️ No LLM models available. Please check your API keys.\")\n",
        "    \n",
        "    def translate_python_to_cpp(self, python_code: str, context: str = \"\", \n",
        "                               test_input: str = \"\", use_all_models: bool = True) -> TranslationComparison:\n",
        "        \"\"\"Translate Python code to C++ using available models.\"\"\"\n",
        "        \n",
        "        if use_all_models:\n",
        "            # Translate with all available models\n",
        "            translation_results = self.llm_manager.translate_with_all_models(python_code, context)\n",
        "        else:\n",
        "            # Use first available model\n",
        "            model_name = self.available_models[0]\n",
        "            result = self.llm_manager.translate_with_model(model_name, python_code, context)\n",
        "            translation_results = {model_name: result}\n",
        "        \n",
        "        # Compile and test each translation\n",
        "        compilation_results = {}\n",
        "        execution_results = {}\n",
        "        performance_metrics = {}\n",
        "        quality_scores = {}\n",
        "        \n",
        "        with CppCompiler() as compiler:\n",
        "            for model_name, translation_result in translation_results.items():\n",
        "                if not translation_result.success:\n",
        "                    continue\n",
        "                \n",
        "                # Compile and test\n",
        "                comp_result, exec_result = compiler.compile_and_test(\n",
        "                    translation_result.translated_code, \n",
        "                    test_input\n",
        "                )\n",
        "                \n",
        "                compilation_results[model_name] = comp_result\n",
        "                if exec_result:\n",
        "                    execution_results[model_name] = exec_result\n",
        "                \n",
        "                # Get performance metrics\n",
        "                perf_metrics = self._get_performance_metrics(compiler, translation_result.translated_code, test_input)\n",
        "                if perf_metrics:\n",
        "                    performance_metrics[model_name] = perf_metrics\n",
        "                \n",
        "                # Analyze code quality\n",
        "                quality_scores[model_name] = CodeQualityAnalyzer.analyze_cpp_quality(\n",
        "                    translation_result.translated_code\n",
        "                )\n",
        "        \n",
        "        # Determine best model\n",
        "        best_model = self._determine_best_model(\n",
        "            translation_results, compilation_results, execution_results, \n",
        "            performance_metrics, quality_scores\n",
        "        )\n",
        "        \n",
        "        # Generate comparison summary\n",
        "        comparison_summary = self._generate_comparison_summary(\n",
        "            translation_results, compilation_results, execution_results,\n",
        "            performance_metrics, quality_scores, best_model\n",
        "        )\n",
        "        \n",
        "        return TranslationComparison(\n",
        "            model_results=translation_results,\n",
        "            compilation_results=compilation_results,\n",
        "            execution_results=execution_results,\n",
        "            performance_metrics=performance_metrics,\n",
        "            quality_scores=quality_scores,\n",
        "            best_model=best_model,\n",
        "            comparison_summary=comparison_summary\n",
        "        )\n",
        "    \n",
        "    def _get_performance_metrics(self, compiler: CppCompiler, cpp_code: str, test_input: str = \"\") -> Optional[PerformanceMetrics]:\n",
        "        \"\"\"Get comprehensive performance metrics (recompiles the code to time it in isolation).\"\"\"\n",
        "        compilation_result, execution_result = compiler.compile_and_test(cpp_code, test_input)\n",
        "        \n",
        "        if not compilation_result.success or not execution_result or not execution_result.success:\n",
        "            return None\n",
        "        \n",
        "        # Source size in bytes (no need to write a file just to measure it)\n",
        "        code_size = len(cpp_code.encode('utf-8'))\n",
        "        \n",
        "        # Get executable size\n",
        "        exe_size = 0\n",
        "        if compilation_result.executable_path and os.path.exists(compilation_result.executable_path):\n",
        "            exe_size = os.path.getsize(compilation_result.executable_path)\n",
        "        \n",
        "        return PerformanceMetrics(\n",
        "            execution_time=execution_result.execution_time,\n",
        "            memory_usage=execution_result.memory_usage,\n",
        "            cpu_usage=0.0,  # Would need more complex monitoring\n",
        "            code_size=code_size,\n",
        "            compilation_time=compilation_result.compilation_time\n",
        "        )\n",
        "    \n",
        "    def _determine_best_model(self, translation_results: Dict[str, TranslationResult],\n",
        "                            compilation_results: Dict[str, CompilationResult],\n",
        "                            execution_results: Dict[str, ExecutionResult],\n",
        "                            performance_metrics: Dict[str, PerformanceMetrics],\n",
        "                            quality_scores: Dict[str, Dict[str, Any]]) -> Optional[str]:\n",
        "        \"\"\"Determine the best model based on multiple criteria.\"\"\"\n",
        "        \n",
        "        scores = {}\n",
        "        \n",
        "        for model_name in translation_results.keys():\n",
        "            score = 0.0\n",
        "            \n",
        "            # Translation success (40% weight)\n",
        "            if translation_results[model_name].success:\n",
        "                score += 0.4\n",
        "            \n",
        "            # Compilation success (30% weight)\n",
        "            if model_name in compilation_results and compilation_results[model_name].success:\n",
        "                score += 0.3\n",
        "            \n",
        "            # Execution success (20% weight)\n",
        "            if model_name in execution_results and execution_results[model_name].success:\n",
        "                score += 0.2\n",
        "            \n",
        "            # Performance (5% weight)\n",
        "            if model_name in performance_metrics:\n",
        "                # Lower execution time is better\n",
        "                exec_time = performance_metrics[model_name].execution_time\n",
        "                if exec_time > 0:\n",
        "                    score += 0.05 * (1.0 / (1.0 + exec_time))\n",
        "            \n",
        "            # Code quality (5% weight)\n",
        "            if model_name in quality_scores:\n",
        "                quality = quality_scores[model_name]\n",
        "                style_score = quality.get('style_score', 0.0)\n",
        "                score += 0.05 * style_score\n",
        "            \n",
        "            scores[model_name] = score\n",
        "        \n",
        "        if scores:\n",
        "            return max(scores, key=scores.get)\n",
        "        return None\n",
        "    \n",
        "    def _generate_comparison_summary(self, translation_results: Dict[str, TranslationResult],\n",
        "                                   compilation_results: Dict[str, CompilationResult],\n",
        "                                   execution_results: Dict[str, ExecutionResult],\n",
        "                                   performance_metrics: Dict[str, PerformanceMetrics],\n",
        "                                   quality_scores: Dict[str, Dict[str, Any]],\n",
        "                                   best_model: Optional[str]) -> str:\n",
        "        \"\"\"Generate a summary of the comparison.\"\"\"\n",
        "        \n",
        "        summary_parts = []\n",
        "        \n",
        "        # Overall success rates\n",
        "        successful_translations = sum(1 for r in translation_results.values() if r.success)\n",
        "        successful_compilations = sum(1 for r in compilation_results.values() if r.success)\n",
        "        successful_executions = sum(1 for r in execution_results.values() if r.success)\n",
        "        \n",
        "        summary_parts.append(f\"Translation Success: {successful_translations}/{len(translation_results)}\")\n",
        "        summary_parts.append(f\"Compilation Success: {successful_compilations}/{len(compilation_results)}\")\n",
        "        summary_parts.append(f\"Execution Success: {successful_executions}/{len(execution_results)}\")\n",
        "        \n",
        "        # Best model\n",
        "        if best_model:\n",
        "            summary_parts.append(f\"Best Model: {best_model}\")\n",
        "            \n",
        "            # Best model details\n",
        "            if best_model in performance_metrics:\n",
        "                perf = performance_metrics[best_model]\n",
        "                summary_parts.append(f\"Best Model Performance:\")\n",
        "                summary_parts.append(f\"  - Execution Time: {perf.execution_time:.4f}s\")\n",
        "                summary_parts.append(f\"  - Memory Usage: {perf.memory_usage:.2f}MB\")\n",
        "                summary_parts.append(f\"  - Compilation Time: {perf.compilation_time:.4f}s\")\n",
        "        \n",
        "        # Quality comparison\n",
        "        if quality_scores:\n",
        "            summary_parts.append(\"Quality Scores:\")\n",
        "            for model, scores in quality_scores.items():\n",
        "                summary_parts.append(f\"  {model}:\")\n",
        "                summary_parts.append(f\"    - Lines of Code: {scores.get('lines_of_code', 0)}\")\n",
        "                summary_parts.append(f\"    - Comment Ratio: {scores.get('comment_ratio', 0):.2%}\")\n",
        "                summary_parts.append(f\"    - Style Score: {scores.get('style_score', 0):.2f}\")\n",
        "                summary_parts.append(f\"    - Complexity: {scores.get('complexity_score', 0)}\")\n",
        "        \n",
        "        return \"\\n\".join(summary_parts)\n",
        "\n",
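        "# Illustrative weighting sketch (hypothetical flags, not live results): a\n",
        "# translation that succeeds (0.4), compiles (0.3) and runs (0.2), with a\n",
        "# 0.02s runtime and a 0.8 style score, lands just under 1.0.\n",
        "_demo_score = 0.4 + 0.3 + 0.2 + 0.05 * (1.0 / (1.0 + 0.02)) + 0.05 * 0.8\n",
        "print(f\"Example best-model score: {_demo_score:.3f}\")\n",
        "\n",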
        "# Initialize the translator\n",
        "translator = CodeTranslator()\n",
        "print(f\"✅ Code Translator initialized!\")\n",
        "print(f\"Available models: {translator.available_models}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 5. Interactive Examples\n",
        "\n",
        "Let's test the translator with some example Python code!\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Example 1: Simple Fibonacci Function\n",
        "python_code_1 = \"\"\"\n",
        "def fibonacci(n):\n",
        "    if n <= 1:\n",
        "        return n\n",
        "    return fibonacci(n-1) + fibonacci(n-2)\n",
        "\n",
        "def main():\n",
        "    print(\"Fibonacci sequence:\")\n",
        "    for i in range(10):\n",
        "        result = fibonacci(i)\n",
        "        print(f\"fibonacci({i}) = {result}\")\n",
        "\n",
        "if __name__ == \"__main__\":\n",
        "    main()\n",
        "\"\"\"\n",
        "\n",
        "print(\"📝 Example 1: Fibonacci Function\")\n",
        "print(\"=\" * 50)\n",
        "print(python_code_1)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test the translation\n",
        "if translator.available_models:\n",
        "    print(\"🔄 Translating Python code to C++...\")\n",
        "    print(\"This may take a few moments...\")\n",
        "    \n",
        "    try:\n",
        "        comparison = translator.translate_python_to_cpp(\n",
        "            python_code_1, \n",
        "            \"Fibonacci sequence generator\",\n",
        "            use_all_models=True\n",
        "        )\n",
        "        \n",
        "        print(f\"✅ Translation completed!\")\n",
        "        print(f\"🏆 Best model: {comparison.best_model}\")\n",
        "        print(f\"📊 Models used: {len(comparison.model_results)}\")\n",
        "        \n",
        "        # Show results for each model\n",
        "        for model_name, result in comparison.model_results.items():\n",
        "            status = \"✅ Success\" if result.success else \"❌ Failed\"\n",
        "            print(f\"\\n{model_name}: {status}\")\n",
        "            if result.success:\n",
        "                print(f\"  Translation time: {result.translation_time:.2f}s\")\n",
        "                if result.token_usage:\n",
        "                    print(f\"  Token usage: {result.token_usage}\")\n",
        "        \n",
        "        # Show compilation results\n",
        "        if comparison.compilation_results:\n",
        "            print(f\"\\n🔨 Compilation Results:\")\n",
        "            for model_name, comp_result in comparison.compilation_results.items():\n",
        "                status = \"✅ Compiled\" if comp_result.success else \"❌ Failed\"\n",
        "                print(f\"  {model_name}: {status}\")\n",
        "        \n",
        "        # Show execution results\n",
        "        if comparison.execution_results:\n",
        "            print(f\"\\n⚡ Execution Results:\")\n",
        "            for model_name, exec_result in comparison.execution_results.items():\n",
        "                status = \"✅ Executed\" if exec_result.success else \"❌ Failed\"\n",
        "                print(f\"  {model_name}: {status}\")\n",
        "                if exec_result.success and exec_result.output:\n",
        "                    print(f\"    Output: {exec_result.output.strip()}\")\n",
        "        \n",
        "    except Exception as e:\n",
        "        print(f\"❌ Translation failed: {e}\")\n",
        "else:\n",
        "    print(\"⚠️ No LLM models available. Please set your API keys.\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Display the best C++ code\n",
        "if 'comparison' in locals() and comparison.best_model:\n",
        "    best_result = comparison.model_results[comparison.best_model]\n",
        "    print(f\"🏆 Best C++ Code (from {comparison.best_model}):\")\n",
        "    print(\"=\" * 60)\n",
        "    print(best_result.translated_code)\n",
        "    \n",
        "    # Show quality metrics\n",
        "    if comparison.best_model in comparison.quality_scores:\n",
        "        quality = comparison.quality_scores[comparison.best_model]\n",
        "        print(f\"\\n📊 Quality Metrics:\")\n",
        "        print(f\"  Lines of code: {quality.get('lines_of_code', 0)}\")\n",
        "        print(f\"  Comment ratio: {quality.get('comment_ratio', 0):.2%}\")\n",
        "        print(f\"  Style score: {quality.get('style_score', 0):.2f}\")\n",
        "        print(f\"  Complexity: {quality.get('complexity_score', 0)}\")\n",
        "        print(f\"  Modern features: {quality.get('modern_cpp_features', [])}\")\n",
        "    \n",
        "    # Show performance metrics\n",
        "    if comparison.best_model in comparison.performance_metrics:\n",
        "        perf = comparison.performance_metrics[comparison.best_model]\n",
        "        print(f\"\\n⚡ Performance Metrics:\")\n",
        "        print(f\"  Execution time: {perf.execution_time:.4f}s\")\n",
        "        print(f\"  Memory usage: {perf.memory_usage:.2f}MB\")\n",
        "        print(f\"  Compilation time: {perf.compilation_time:.4f}s\")\n",
        "        print(f\"  Code size: {perf.code_size} bytes\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 6. Additional Examples\n",
        "\n",
        "Let's try a more complex example with classes and algorithms.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Example 2: Calculator Class\n",
        "python_code_2 = \"\"\"\n",
        "class Calculator:\n",
        "    def __init__(self):\n",
        "        self.history = []\n",
        "    \n",
        "    def add(self, a, b):\n",
        "        result = a + b\n",
        "        self.history.append(f\"{a} + {b} = {result}\")\n",
        "        return result\n",
        "    \n",
        "    def multiply(self, a, b):\n",
        "        result = a * b\n",
        "        self.history.append(f\"{a} * {b} = {result}\")\n",
        "        return result\n",
        "    \n",
        "    def get_history(self):\n",
        "        return self.history\n",
        "\n",
        "def main():\n",
        "    calc = Calculator()\n",
        "    print(\"Calculator Demo\")\n",
        "    print(calc.add(5, 3))\n",
        "    print(calc.multiply(4, 7))\n",
        "    print(\"History:\", calc.get_history())\n",
        "\n",
        "if __name__ == \"__main__\":\n",
        "    main()\n",
        "\"\"\"\n",
        "\n",
        "print(\"📝 Example 2: Calculator Class\")\n",
        "print(\"=\" * 50)\n",
        "print(python_code_2)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test the second example\n",
        "if translator.available_models:\n",
        "    print(\"🔄 Translating Calculator class...\")\n",
        "    \n",
        "    try:\n",
        "        comparison2 = translator.translate_python_to_cpp(\n",
        "            python_code_2, \n",
        "            \"Calculator class with history tracking\",\n",
        "            use_all_models=True\n",
        "        )\n",
        "        \n",
        "        print(f\"✅ Translation completed!\")\n",
        "        print(f\"🏆 Best model: {comparison2.best_model}\")\n",
        "        \n",
        "        # Show summary\n",
        "        print(f\"\\n📊 Summary:\")\n",
        "        print(comparison2.comparison_summary)\n",
        "        \n",
        "    except Exception as e:\n",
        "        print(f\"❌ Translation failed: {e}\")\n",
        "else:\n",
        "    print(\"⚠️ No LLM models available.\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 7. Summary and Results\n",
        "\n",
        "This notebook demonstrates a comprehensive AI-powered code translation system that:\n",
        "\n",
        "### Key Achievements:\n",
        "- **Multi-LLM Support**: Successfully integrates OpenAI GPT, Anthropic Claude, and Google Gemini\n",
        "- **C++ Compilation**: Automatically compiles and tests generated C++ code\n",
        "- **Quality Analysis**: Provides detailed code quality metrics and performance benchmarking\n",
        "- **Model Comparison**: Compares translation results across different AI models\n",
        "- **Error Handling**: Robust error handling with detailed diagnostics\n",
        "\n",
        "### Use Cases:\n",
        "- **Learning C++**: Translate Python code to learn C++ equivalents\n",
        "- **Code Migration**: Convert Python projects to C++ for performance\n",
        "- **Educational Tool**: Compare different AI models' translation quality\n",
        "- **Performance Analysis**: Benchmark Python vs C++ implementations\n",
        "\n",
        "### Next Steps:\n",
        "1. Set up your API keys for OpenAI, Anthropic, and Google\n",
        "2. Run the notebook cells to test the translation system\n",
        "3. Experiment with your own Python code\n",
        "4. Compare results across different AI models\n",
        "5. Analyze code quality and performance metrics\n",
        "\n",
        "**Happy coding! 🎉**\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": ".venv",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.12"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
