import os
from datetime import datetime
from loguru import logger

def think(agent, user_command: str, chat_history: str, memory_context: str, current_situation: dict, iteration_count: int) -> str:
    """
    The core thinking process of the RootAgent. Decides the next action based on the
    user command, chat history, memory context, and current situation.

    Returns:
        The raw LLM response: a leading <think>...</think> block followed by one of:
        - Direct reply: "<reply>...</reply>"
        - Clarifying questions: "<clarify>...</clarify>"
        - Tool call(s): "<tool_calls><task>...</task><task>...</task></tool_calls>"
        - Detailed plan: "<all_plan><task>...</task><task>...</task></all_plan>"
        - Memory retrieval: "<memory><search_1>...</search_1>...</memory>"
    """
    current_date = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    current_working_dir = os.getcwd()
    tool_descriptions = agent.tool_manager.get_tool_descriptions()

    prompt = f"""Given the user's command, chat history, relevant memories, and current situation, determine the best course of action.

Your response should always start with a <think> tag explaining your reasoning. After the <think> tag, provide one of the following XML structures:

1.  **Direct Reply**: If the user's command is a simple question that can be answered directly, or a simple command that can be executed with a single tool call and its result can be directly returned.
    Format: `<reply>Your direct answer or the result of the simple tool call</reply>`

2.  **Clarifying Questions**: If the user's command is unclear or lacks sufficient information to proceed.
    Format: `<clarify>Your clarifying questions</clarify>`

3.  **Tool Call(s)**: If the command requires one or more direct tool calls to gather information or perform an action, and the result of these calls will directly lead to a final answer or a clear next step. You can include multiple tool calls within a <tool_calls> tag.
    Format: `<tool_calls><task><tool_name>...</tool_name><params>...</params><expect>...</expect><dependencies>...</dependencies><time>...</time><failure_policy>...</failure_policy></task>...</tool_calls>`

4.  **Detailed Plan (`all_plan`)**: If the command is complex and requires multiple steps, a structured approach, or significant problem-solving, and you estimate it will take more than 15 steps (tool calls or memory retrievals) to complete. The plan should be a sequence of <task> elements.
    Format: `<all_plan><task><tool_name>...</tool_name><params>...</params><expect>...</expect><dependencies>...</dependencies><time>...</time><failure_policy>...</failure_policy></task>...</all_plan>`

5.  **Memory Retrieval**: If you need to search stored memories before deciding on the next action.
    Format: `<memory><search_1>...</search_1><search_2>...</search_2></memory>`

Current Date and Time: {current_date}
Operating System: {agent.os_info}
Current Working Directory: {current_working_dir}
Iteration Count: {iteration_count}

User Command: {user_command}
Chat History: {chat_history}
Memory Context: {memory_context}
File Context: {current_situation.get("file_context", "")}

Available tools:
{tool_descriptions}

Consider the following:
- **First Step**: If the command is a simple factual question, provide the answer directly using `<reply>`. If the command is ambiguous, ask clarifying questions using `<clarify>`.
- **Iterative Execution**: For tasks that are not simple enough for a direct reply or clarification, perform memory retrieval and tool calls using `<tool_calls>`. Continue this iterative process until a final answer can be provided or a detailed plan is necessary.
- **Complex Task Handling**: 
    - If `Iteration Count` is less than 3, **DO NOT** generate an `<all_plan>`. Instead, focus on generating `<tool_calls>` to gather more information or perform initial actions, even if the task seems complex. The goal is to explore and gather more context in the initial iterations.
    - If `Iteration Count` is 3 or greater, and you determine the task is complex and will require more than 15 steps of tool calls or memory retrievals, generate a detailed plan using `<all_plan>`. Once an `<all_plan>` is generated, the system will execute it to completion.

Example of Direct Reply for a simple question:
<think>The user is asking for the current date. This is a simple question that can be answered directly.</think><reply>The current date is {current_date}.</reply>

Example of Clarifying Questions:
<think>The user's request is ambiguous; more information is needed before proceeding.</think><clarify>Could you please specify which directory you want me to list files from?</clarify>

Example of Tool Call(s) for iterative execution:
<think>The user asked to list files. I will use the list_directory tool to get the file listing.</think><tool_calls>
    <task><tool_name>list_directory</tool_name><params><path>./</path></params><expect>A list of files and directories.</expect><dependencies></dependencies><time>1</time><failure_policy>retry 3 times</failure_policy></task>
</tool_calls>

Example of Memory Retrieval:
<think>I need to retrieve memories about the user's preferences to better understand the task.</think><memory><search_1>user preferences</search_1><search_2>information related to project A</search_2></memory>

Note: `time` is an integer number of seconds.

Example of Detailed Plan (`all_plan`) for a complex task (only allowed from iteration 3 onwards):
<think>The user asked to implement a new feature. This requires multiple steps, likely more than 15, and the current iteration count permits generating an all_plan, so a detailed plan is needed.</think><all_plan>
    <task><tool_name>search_file_content</tool_name><params><pattern>authentication</pattern><include>*.py</include></params><expect>Relevant files for authentication module.</expect><dependencies></dependencies><time>5</time><failure_policy>retry 3 times</failure_policy></task>
    <task><tool_name>read_file</tool_name><params><absolute_path>/path/to/auth.py</absolute_path></params><expect>Content of auth.py.</expect><dependencies>search_file_content</dependencies><time>2</time><failure_policy>retry 3 times</failure_policy></task>
    <task><tool_name>write_file</tool_name><params><file_path>/path/to/new_auth.py</file_path><content># New authentication logic</content></params><expect>New auth file created.</expect><dependencies>read_file</dependencies><time>10</time><failure_policy>retry 3 times</failure_policy></task>
</all_plan>

"""
    llm_response_data = agent.llm.generate_content(prompt, task_type="thinking")
    llm_response = llm_response_data["output_content"]
    agent.llm_usage_metrics["total_input_tokens"] += llm_response_data["input_tokens"]
    agent.llm_usage_metrics["total_output_tokens"] += llm_response_data["output_tokens"]
    agent.llm_usage_metrics["total_cost"] += llm_response_data["cost"]
    agent.llm_usage_metrics["task_llm_metrics"].append({
        "task_type": "thinking",
        "input_tokens": llm_response_data["input_tokens"],
        "output_tokens": llm_response_data["output_tokens"],
        "cost": llm_response_data["cost"]
    })
    logger.debug(f"LLM response for thinking: {llm_response}")
    return llm_response
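
# The tagged string returned by think() has to be split back into its reasoning
# and action parts by the caller. Below is a minimal sketch of that parsing
# step, assuming the tag vocabulary described in the prompt above; the helper
# names `parse_think_response` and `extract_tasks` are illustrative, not part
# of the agent's existing API.

```python
import re
import xml.etree.ElementTree as ET


def parse_think_response(llm_response: str) -> dict:
    """Split a raw think() response into its <think> reasoning and action tag."""
    thought_match = re.search(r"<think>(.*?)</think>", llm_response, re.DOTALL)
    thought = thought_match.group(1).strip() if thought_match else ""
    # The action is whichever of the known top-level tags appears in the response.
    for tag in ("reply", "clarify", "tool_calls", "all_plan", "memory"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", llm_response, re.DOTALL)
        if match:
            return {"thought": thought, "action_type": tag,
                    "action_body": match.group(1).strip()}
    return {"thought": thought, "action_type": "unknown", "action_body": ""}


def extract_tasks(task_list_body: str) -> list:
    """Parse the <task> elements inside a <tool_calls> or <all_plan> body."""
    # Wrap in a synthetic root so a bare sequence of <task> elements is valid XML.
    root = ET.fromstring(f"<root>{task_list_body}</root>")
    tasks = []
    for task in root.findall("task"):
        params_el = task.find("params")
        tasks.append({
            "tool_name": task.findtext("tool_name", "").strip(),
            "params": ({p.tag: (p.text or "").strip() for p in params_el}
                       if params_el is not None else {}),
            "expect": task.findtext("expect", "").strip(),
            "dependencies": task.findtext("dependencies", "").strip(),
            "time": int(task.findtext("time", "0") or 0),
            "failure_policy": task.findtext("failure_policy", "").strip(),
        })
    return tasks
```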
