# %% [markdown]
# # Competitive Programming
# 
# In this tutorial, you will build a computing olympiad agent that leverages three complementary techniques to boost performance: **reflection**, **retrieval**, and **human-in-the-loop** collaboration. These techniques and data are all adapted from the paper "Can Language Models Solve Olympiad Programming?" by Quan Shi, Michael Tang, Karthik Narasimhan, and Shunyu Yao. You can check out their paper at the following link:
# 
# [![arXiv](http://img.shields.io/badge/cs.CL-arXiv%3A2404.10952v1-B31B1B.svg)](https://arxiv.org/abs/2404.10952v1)
# 
# You will construct an agentic graph capable of answering programming questions of increasing difficulty.
# 
# 1. **Reflection**: In Part 1, you will create a zero-shot tool-calling agent and prompt it to reflect on the test case results to correct its initial errors. This is similar to the agent the paper reported as having a pass rate of 12.38 on the USACO benchmark.
# 2. **Retrieval**: In Part 2, you will implement an initial retrieval step as "episodic memory" for the agent that retrieves high-quality few-shot examples from our corpora of programming problems to help solve the **bronze**-level questions. This agent is similar to the one the paper benchmarked at 20.2.
# 3. **Human-in-the-loop**: In Part 3, you will use `interrupt_after` to let the user copilot the agent to a better answer. Benchmark performance is then constrained only by the competitiveness of the human the agent is paired with.
# 
# Your final agent graph will be structured like the diagram below:
# 
# ![diagram](./img/diagram.png)
# 
# Parts 1 and 2 are analogous to the systems benchmarked in the paper as having a pass rate of 12.38 and 20.2 respectively.
# 
# ![Benchmark system results](./img/benchmark.png)
# 
# 
# While LLMs are not yet capable of autonomously solving all of these problems, we can design a system that far surpasses the capabilities of a basic ReAct agent at answering these questions.
# 
# Before diving in, let's set up our machine. This will involve installing dependencies, fetching the dataset, and defining a utility function.
# 
# ## Setup
# 
# We will install the required packages, fetch the USACO dataset, and define a utility that runs candidate solutions against the test cases.
# 
# First, let's install the required packages and set our API keys

# %%
# %%capture --no-stderr
# %pip install -U langgraph langsmith langchain_anthropic langchain_openai datasets langchain langchainhub python-dotenv

# %%
import getpass
import os

from dotenv import load_dotenv

load_dotenv()

def _get_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


# _get_env("ANTHROPIC_API_KEY")

# %% [markdown]
# <div class="admonition tip">
#     <p class="admonition-title">Set up <a href="https://smith.langchain.com">LangSmith</a> for LangGraph development</p>
#     <p style="padding-top: 5px;">
#         Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started <a href="https://docs.smith.langchain.com">here</a>. 
#     </p>
# </div>    

# %% [markdown]
# #### Data
# 
# Fetch the USACO benchmark data using the util below:

# %%
import os
import zipfile

import datasets
from datasets import Dataset
import requests

def download_usaco_dataset(extract_path: str = "usaco_datasets"):
    print("Downloading USACO dataset...")
    usaco_url = "https://storage.googleapis.com/benchmarks-artifacts/usaco/usaco_sampled_with_tests.zip"
    zip_path = "usaco_sampled_with_tests.zip"

    response = requests.get(usaco_url)
    with open(zip_path, "wb") as file:
        file.write(response.content)

    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        zip_ref.extractall(extract_path)
    os.remove(zip_path)

def load_dataset(extract_path: str = "usaco_datasets") -> Dataset:
    # Resolve the dataset path relative to this file
    current_path = os.path.dirname(os.path.abspath(__file__))
    ds = datasets.load_from_disk(
        os.path.join(current_path, extract_path, "usaco_v3_sampled_with_tests")
    )
    return ds

def convert_inputs(ds: Dataset) -> list:
    """Convert the dataset rows into the input states our graph accepts."""
    input_states = [
        {
            "messages": [("user", row["description"])],
            "test_cases": row["test_cases"],
            "runtime_limit": row["runtime_limit"],
            "status": "in_progress",
            "problem_level": row["problem_level"],
        }
        for row in ds
    ]
    return input_states
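
# %% [markdown]
# To make the expected input shape concrete, here is a small, stdlib-only sketch using a hypothetical, made-up row (real rows come from the USACO dataset):

# %%
```python
# A hypothetical dataset row (made-up values, for illustration only).
row = {
    "description": "Read two integers and print their sum.",
    "test_cases": [{"inputs": "1 2\n", "outputs": "3\n"}],
    "runtime_limit": 2,
    "problem_level": "bronze",
}

# The same per-row transformation `convert_inputs` applies:
state = {
    "messages": [("user", row["description"])],
    "test_cases": row["test_cases"],
    "runtime_limit": row["runtime_limit"],
    "status": "in_progress",
    "problem_level": row["problem_level"],
}
print(state["status"])  # -> in_progress
```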

# %% [markdown]
# #### Test Evaluation Utils
# 
# We also need a way to evaluate our generated code. We will use this unsafe code execution program to run the generated code against our test cases.
# **Note:** The code below runs arbitrary code on your local machine! Proceed with caution.

# %%
import multiprocessing
from multiprocessing import Manager, Process
import queue
import subprocess
import sys
import time
import traceback

# WARNING
# This program exists to execute untrusted model-generated code. Although
# it is highly unlikely that model-generated code will do something overtly
# malicious in response to this test suite, model-generated code may act
# destructively due to a lack of model capability or alignment.
# Users are strongly encouraged to sandbox this evaluation suite so that it
# does not perform destructive actions on their host or network.
# Proceed at your own risk:


def exec_program(q, program, input_data, expected_output, timeout):
    try:
        start_time = time.time()
        process = subprocess.Popen(
            [sys.executable, "-c", program],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )
        stdout, stderr = process.communicate(input=input_data, timeout=timeout)
        if time.time() - start_time > timeout:
            raise TimeoutError("Execution timed out.")
        if process.returncode != 0:
            q.put(f"failed: {stderr}")
        else:
            if stdout.strip() == expected_output.strip():
                q.put("passed")
            else:
                q.put(f"wrong answer. Expected '{expected_output}', got '{stdout}'")
    except subprocess.TimeoutExpired:
        process.kill()
        q.put("timed out")
    except Exception:
        q.put(f"failed: {traceback.format_exc()}")


def check_correctness(
    program: str, input_data: str, expected_output: str, timeout: float
) -> str:
    manager = Manager()
    q = manager.Queue()
    process = Process(
        target=exec_program, args=(q, program, input_data, expected_output, timeout)
    )
    process.start()
    process.join(timeout=timeout + 1)
    if process.is_alive():
        process.terminate()
        process.join()
        result = "timed out"
    else:
        try:
            result = q.get_nowait()
        except queue.Empty:
            result = "no result returned"
    return result
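
# %% [markdown]
# The "timed out" branch can be exercised directly: a deliberately hanging program makes `communicate` raise `subprocess.TimeoutExpired`, which the code above maps to a "timed out" result. A minimal stdlib reproduction of just that branch:

# %%
```python
import subprocess
import sys

# A deliberately hanging "solution": it sleeps far longer than the allowed time.
hanging_program = "import time; time.sleep(10)"

proc = subprocess.Popen(
    [sys.executable, "-c", hanging_program],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
try:
    proc.communicate(input="", timeout=1)  # raises after ~1 second
    result = "passed"
except subprocess.TimeoutExpired:
    proc.kill()
    proc.communicate()  # reap the killed child process
    result = "timed out"

print(result)  # -> timed out
```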

# %% [markdown]
# Let's check an example program and output to see how it works:

# %%
# from multiprocessing import freeze_support  # only needed on Windows
# freeze_support()
def test_check_correctness():
    program_code = "print('hello, world!')"
    input_data = ""
    expected_output = "hello, world!"
    timeout = 120

    test_result = check_correctness(program_code, input_data, expected_output, timeout)
    print("Example 1: ", test_result)
    test_result = check_correctness("print('goodbye')", input_data, "hi there", timeout)
    print("Example 2: ", test_result)

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import AnyMessage, add_messages

class TestCase(TypedDict):
    inputs: str
    outputs: str

class State(TypedDict):
    # Append-only chat memory so the agent can try to recover from initial mistakes.
    messages: Annotated[list[AnyMessage], add_messages]
    # From the dataset. These are used for testing.
    test_cases: list[TestCase]
    runtime_limit: int
    status: str
    problem_level: str
    timeout: int

from langchain_core.language_models import BaseChatModel
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field


class writePython(BaseModel):
    """Write python code that resolves the problem."""

    reasoning: str = Field(..., description="Conceptual solution.")
    pseudocode: str = Field(..., description="Detailed English pseudocode.")
    code: str = Field(..., description="Valid Python 3 solution to the problem")


class Solver:
    def __init__(self, llm: BaseChatModel, prompt: ChatPromptTemplate):
        self.runnable = prompt | llm.bind_tools([writePython])

    def __call__(self, state: State) -> dict:
        # Our agent can only see the "messages" and will ignore the test info
        return {"messages": [self.runnable.invoke({"messages": state["messages"]})]}

from langchain import hub
from langchain_core.language_models import BaseChatModel
from langchain_openai import ChatOpenAI

def test_solver(llm, prompt):
    solver = Solver(llm, prompt)
    print("*" * 34 + " Example " + "*" * 34)
    result = solver(
        {
            "messages": [
                (
                    "user",
                    "How do I get a perfectly random sample from an infinite stream",
                )
            ]
        }
    )
    result["messages"][0].pretty_print()


from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# This is the node we will add to the graph.
# Most tool-calling APIs require that the `ToolMessage` contain the ID
# of the tool call it is responding to.
def format_tool_message(response: str, ai_message: AIMessage):
    return ToolMessage(
        content=response + "\nMake all fixes using the writePython tool.",
        tool_call_id=ai_message.tool_calls[0]["id"],
    )

def evaluate(state: State):
    test_cases = state["test_cases"]
    ai_message: AIMessage = state["messages"][-1]
    timeout = state["timeout"]
    if not ai_message.tool_calls:
        return {
            "messages": [
                HumanMessage(
                    content="No code submitted. Please try again using the correct python code."
                )
            ]
        }
    try:
        code = ai_message.tool_calls[0]["args"]["code"]
    except Exception as e:
        return {"messages": [format_tool_message(repr(e), ai_message)]}
    num_test_cases = len(test_cases)
    succeeded = 0
    test_results = []
    # TODO: Multiprocess
    for test_case in test_cases:
        input_data = test_case["inputs"]
        expected_output = test_case["outputs"]
        test_result = check_correctness(code, input_data, expected_output, timeout)
        test_results.append(test_result)
        if test_result == "passed":
            succeeded += 1
    pass_rate = succeeded / num_test_cases if num_test_cases else "N/A"
    if pass_rate == 1:
        return {"status": "success"}

    responses = "\n".join(
        [f"<test id={i}>\n{r}\n</test>" for i, r in enumerate(test_results)]
    )
    response = f"Incorrect submission. Please respond with updated code.\nPass rate: {succeeded}/{num_test_cases}\nResults:\n{responses}"
    formatted_message = format_tool_message(response, ai_message)
    return {"messages": [formatted_message]}
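
# %% [markdown]
# The feedback the solver receives is plain text with one `<test>` block per case. A stdlib-only sketch with hypothetical results shows the exact format:

# %%
```python
# Hypothetical per-test results, mirroring the strings check_correctness returns.
test_results = ["passed", "wrong answer. Expected '3', got '4'"]
succeeded = sum(r == "passed" for r in test_results)

responses = "\n".join(
    f"<test id={i}>\n{r}\n</test>" for i, r in enumerate(test_results)
)
response = (
    "Incorrect submission. Please respond with updated code.\n"
    f"Pass rate: {succeeded}/{len(test_results)}\nResults:\n{responses}"
)
print(response)
```" in responses
assert "<test id=1>" in responses
</test>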

from langgraph.graph import END, StateGraph, START
from langgraph.graph.state import CompiledStateGraph

def control_edge(state: State):
    if state.get("status") == "success":
        return END
    return "solver"
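
# %% [markdown]
# The routing function is pure and easy to check in isolation. A stdlib-only sketch of the same logic, using the literal `"__end__"` as a stand-in for langgraph's `END` sentinel:

# %%
```python
END_SENTINEL = "__end__"  # stand-in for langgraph's END constant in this sketch

def route(state: dict) -> str:
    # Same logic as control_edge: finish on success, otherwise loop back to the solver.
    if state.get("status") == "success":
        return END_SENTINEL
    return "solver"

print(route({"status": "success"}))      # -> __end__
print(route({"status": "in_progress"}))  # -> solver
```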

# %% [markdown]
# #### Create Graph
# 
# Now, put it all together! Once you've defined each node, defining the connectivity / state transitions is fairly easy.
# 
# Our zero-shot graph defines a loop. If we visualize the data flow, we want the logic to:
# 1. First go to the `solver`, which attempts a first solution.
# 2. Next go to the `evaluate` node, which tests the solution.
# 3. If the solution passes, end; otherwise, return to the `solver` to try again.
# 
# In LangGraph, we use `conditional_edges` to define state transitions that contain conditional logic.
# Below, define the graph, adding a `control_edge` to handle step (3) above.

# %%
def create_graph(llm, prompt) -> CompiledStateGraph:
    solver = Solver(llm, prompt)
    builder = StateGraph(State)
    builder.add_node("solver", solver)
    builder.add_edge(START, "solver")
    builder.add_node("evaluate", evaluate)
    builder.add_edge("solver", "evaluate")
    builder.add_conditional_edges("evaluate", control_edge, {END: END, "solver": "solver"})
    graph = builder.compile()

    try:
        # Render the graph to a PNG next to this script
        out_path = os.path.splitext(os.path.abspath(__file__))[0] + "_graph.png"
        graph.get_graph().draw_mermaid_png(output_file_path=out_path)
    except Exception:
        # This requires some extra dependencies and is optional
        pass

    return graph

def zero_shot_example(graph, input_states):
    # Now that we've created our graph, let's see the type of question it will have to solve.

    input_state = input_states[0].copy()
    # We will reduce the test cases to speed this notebook up
    input_state["test_cases"] = input_state["test_cases"][:3]
    print(input_state["messages"][0][1])

    input_state["timeout"] = 60

    # Pretty difficult! Let's run our simple "zero-shot" agent below to see how it fares. **It most likely will not be able to solve this question** (unless you are using a more powerful model than what I had available at the time of writing this tutorial, 2024/04/20).
    # We will trace the trajectory to LangSmith to review the series of submissions. To reduce the packet size, we will use "`hide_inputs`" and filter out the test_cases. All this is optional but useful for development. 
    # 
    # **Note:** We _expect_ a **GraphRecursionError** here from it not being able to answer it correctly in the allocated number of steps.
    from langchain_core.tracers.context import tracing_v2_enabled
    from langsmith import Client

    # We don't need to include all the test cases in our traces.
    def _hide_test_cases(inputs):
        copied = inputs.copy()
        # These are tens of MB in size. No need to send them up
        copied["test_cases"] = "..."
        return copied

    client = Client(hide_inputs=_hide_test_cases, hide_outputs=_hide_test_cases)
    with tracing_v2_enabled(client=client):
        events = graph.stream(input_state)
        for event in events:
            for value in event.values():
                messages = value.get("messages")
                if messages:
                    if isinstance(messages, list):
                        messages = value["messages"][-1]
                    print(
                        "Assistant:",
                        str(messages.content).replace("\n", "\\n")[:50],
                    )

if __name__ == "__main__":
    # multiprocessing.set_start_method("fork", force=True)
    # Use the "spawn" start method so child processes start with a clean state
    multiprocessing.set_start_method("spawn", force=True)
    multiprocessing.freeze_support()

    # Smoke-test the multiprocessing sandbox
    # test_check_correctness()

    # Download the dataset (only needed on the first run)
    # download_usaco_dataset()
    # Load the dataset
    ds = load_dataset()
    input_states = convert_inputs(ds)

    # For this section, we are testing zero-shot performance and won't have
    # any examples. Partial them out to pre-fill the template.
    prompt = hub.pull("wfh/usaco-draft-solver").partial(examples="")
    print("*" * 35 + "Prompt" + "*" * 35)
    prompt.pretty_print()

    # Local Qwen model
    # llm = ChatOpenAI(model="qwen2:72b", api_key="ollama", base_url="http://chat.192.168.107.2.nip.io/v1/")
    # Local Baichuan model
    # llm = ChatOpenAI(model="Baichuan2", api_key="123", base_url="http://bc.192.168.107.2.nip.io/v1")
    # DeepSeek model (supply your own API key)
    # llm = ChatOpenAI(model="deepseek-chat", api_key="sk-...", base_url="https://api.deepseek.com/v1")
    # Tongyi (Qwen) via the community chat-model integration (supply your own API key)
    # from langchain_community.chat_models.tongyi import ChatTongyi
    # llm = ChatTongyi(model="qwen-plus", api_key="sk-...")

    # llm = ChatOpenAI(model="gpt-4o-mini")
    llm = ChatOpenAI(model="anthropic/claude-3-haiku")

    # test_solver(llm, prompt)

    graph = create_graph(llm, prompt)
    zero_shot_example(graph, input_states)
 