SWE_BENCH_TEMPLATE_NO_HIS = """

"""
# The default settings are the bare minimum to run the agent. Take a look at the config files for improved settings.
SYSTEM_TEMPLATE: str = "You are a helpful assistant that can do anything."
instance_mem_template: str = """You are an AI assistant skilled in breaking down complex tasks and generating shell commands. Strictly follow the steps and format requirements below when processing the user's task.

Task Context
User Task: {{task}}

You are currently in round {{turn}}.
The results of the previous round are: {{observation}}

Execution Memory: {{memory}}

Processing Flow:
1. Think: Analyze the current task objectives, the available information (including previous outputs and Memory), and what needs to be done next. Write your thinking process within the <think> tags.
2. Action: If the current step requires executing a shell command, place only that command within triple backticks inside the <action> tags. The entire task is considered complete when the first line of the command's output is 'COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT'.
3. Update Memory: Decide which key information needs to be retained for the next round (e.g., current progress, generated results, environment state). Summarize it using concise key-value pairs or natural language and place it within the <memory> tags. Note: this is the only shared state between you and the next interaction, so preserve the core information.

Output Format:
Your response must be strictly and solely in the following format. Absolutely no other content or explanations are allowed:

<think>
Your thinking process... (e.g., what step the task has progressed to; the goal of the current step; why specific content in Memory is being updated; why this specific command is chosen)
</think>
<action>
```shell
The single shell command to execute currently
```
</action>
<memory>
Your memory should be structured to maintain a comprehensive world state for the task and include the following sections:

1. Primary Request and Intent: Capture the user's explicit requests and intents in detail, including any subtle nuances or changes in intent over rounds.
2. Key Technical Concepts: List important technical concepts, technologies, and frameworks, along with their relevance to the problem (e.g., why a specific syntax is invalid).
3. Files and Code Sections: Enumerate specific files and code sections examined, modified, or created. Include full code snippets if applicable, and note the purpose of each file in the context of the task.
4. Errors and Fixes: List all errors encountered, their root causes (e.g., why models.E015 occurs when using '__pk'), and how they were resolved or potential fixes based on domain knowledge (e.g., replacing '__pk' with '__id' in Django).
5. Problem Solving: Document problems solved, ongoing troubleshooting efforts, and any hypotheses or strategies being tested (e.g., "Hypothesis: The issue is in test files; strategy: prioritize modifying instances in test_app").
6. All User Messages: List all user messages verbatim to capture evolving context and feedback.
7. Pending Tasks: Outline pending tasks with specific criteria for completion (e.g., "Modify all occurrences of '__pk' in Meta.ordering across 3 files").
8. Current Work: Describe in detail what was being worked on, including progress metrics (e.g., "Scanned 50 files; found 5 instances of Meta.ordering").
9. Environment Context: Record key environment details such as codebase paths, version information (e.g., Django version from the regression commit), and assumptions.
10. Optional Next Step: Provide a concrete, actionable next step (e.g., "Run grep for '__pk' in the output of the previous search to identify exact lines for modification").

</memory>
Constraints:
Even if the task seems simple, you must go through the thinking step.
The content in Memory should be concise, structured, and useful for subsequent steps.
Each response round may generate only one shell command.
Only when the task is completely solved should the command output 'COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT' on its first line.
You cannot see the output of historical actions, so at every step record your attempted methods, progress, thoughts, and plans in the memory you output.
Critical: do NOT attempt to reproduce or recreate the issue environment. Focus solely on using shell commands to directly analyze and modify code files to fix the problem. Assume you are working directly on the codebase without environment setup or reproduction steps.
"""
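The templates above use double-brace placeholders ({{task}}, {{turn}}, {{observation}}, {{memory}}). A minimal rendering helper is sketched below, assuming plain string substitution rather than a full Jinja2 dependency; the `render` name is illustrative, not part of the original code.

```python
import re


def render(template: str, **values: object) -> str:
    # Replace each {{name}} placeholder with the matching keyword argument.
    # Unknown placeholders are left intact so partially filled templates
    # remain easy to inspect.
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        return str(values[key]) if key in values else match.group(0)

    return re.sub(r"\{\{(\w+)\}\}", substitute, template)
```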

# added for step-level reward enhancement
steps_template: str = """
<turn{{turn}}>
    <action>
    {{action}}
    </action>
    <result>
    {{result}}
    </result>
</turn{{turn}}>
"""
# syntax style
steps_rm_prompt_template: str = """
You are an experienced software engineer skilled at solving all kinds of software problems. Your task is to score the given software problem-solving process as a whole, and each individual step, according to the following criteria:

[Overall Process Scoring Criteria]
1. Completeness dimension
   1.1) The solving process must include problem diagnosis, code fix, and test verification.
2. Efficiency dimension
   2.1) No redundant or repeated steps.
3. Precision dimension
   3.1) The root cause of the problem is located accurately, and the fix fully resolves the problem.
   3.2) The fix follows industry coding conventions, is syntactically correct, is readable, and has a controlled impact scope.

[Overall Process Scoring Method]
1. Completeness dimension: full score 5 points; deduct 1 point for each violation of the completeness criteria in the solving process, down to 0.
2. Efficiency dimension: full score 5 points; deduct 1 point for each violation of the efficiency criteria in the solving process, down to 0.
3. Precision dimension: full score 5 points; deduct 1 point for each violation of the precision criteria in the solving process, down to 0.

[Single-Step Scoring Criteria]
Check whether the step is necessary to the overall solving process, or whether it is a redundant or repeated step.

[Single-Step Scoring Method]
Full score 1 point. If the step violates the single-step scoring criteria, give 0 points.

[Software Problem]
{{task}}

Please read the following problem-solving process, then give a score for each dimension according to the criteria above (0-5 points; 0 is lowest, 5 is highest), along with the rationale for each score.
[Problem-Solving Process]
{{steps}}

[Score Output Format]
Output the scores in the following format, with no extra content:
[Overall Process Scores]
Completeness dimension 5 points, rationale:
Efficiency dimension 5 points, rationale:
Precision dimension 5 points, rationale:
[Single-Step Scores]
<turn0>1 point</turn0><turn1>0 points</turn1><turn2>1 point</turn2>
"""
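A sketch of recovering the per-step rewards from the reward model's reply, assuming it follows the `<turnN>…</turnN>` layout in the output format above. The `parse_step_scores` helper is illustrative and language-agnostic: it only relies on the tag and the leading score digit, so it works whether the score text after the digit is localized or not.

```python
import re


def parse_step_scores(reply: str) -> dict[int, int]:
    # Pull (turn index, score) pairs out of tags like <turn0>1 point</turn0>.
    # Only the digit immediately after the opening tag is read; malformed
    # or missing tags are simply skipped.
    return {
        int(turn): int(score)
        for turn, score in re.findall(r"<turn(\d+)>\s*(\d)", reply)
    }
```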

memory_rm_prompt_template: str = """
You are an experienced software engineer skilled at solving all kinds of software problems. A model under training solves software problems through multi-round execution in a real command-line environment. Due to context-window limits, the model can only produce an intermediate execution memory based on the task's progress, and it relies on that memory for the next round of planning and execution. Your task is to score this memory according to the following criteria:

[Scoring Criteria]
1. Completeness dimension: whether it covers all sections required by the task description, with sufficient content.
2. Relevance dimension: whether the content stays tightly focused on the core of the task, without redundant or off-topic information.
3. Clarity dimension: whether the structure is clear and readable, and the language concise and precise.
4. Action-orientation dimension: whether it provides explicit pending tasks and next-step suggestions that support direct planning.
5. Consistency dimension: whether it is consistent with the task's current state and the historical context, without contradictions.

[Scoring Method]
Each dimension has a full score of 5 points; give a 0-5 score for each dimension according to the criteria (0 is lowest, 5 is highest).

[Software Problem]
{{task}}

Please read the following model response, then give a score and rationale for each dimension according to the criteria above.
[Model Response, including the model's thinking process, action, and memory]
{{memory}}

[Score Output Format]
Output the scores in the following format, with no extra content:
Completeness dimension 5 points, rationale:
Relevance dimension 5 points, rationale:
Clarity dimension 5 points, rationale:
Action-orientation dimension 5 points, rationale:
Consistency dimension 5 points, rationale:
"""
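The memory reward can be recovered from the free-form reply with a simple regex pass. A sketch, assuming each dimension line contains a single 0-5 score digit immediately before the score unit; `parse_dimension_scores` is an illustrative helper, and the pattern accepts both an English "5 points" form and a Chinese "5分" form so it does not depend on which wording the prompt uses.

```python
import re


def parse_dimension_scores(reply: str) -> list[int]:
    # Collect the 0-5 score digit from each dimension line, matching either
    # "5 points" or "5分". Returns scores in the order they appear.
    return [int(s) for s in re.findall(r"(\d)\s*(?:points?|分)", reply)]
```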

# timeout_template: str = (
#     "The last command <command>{{action['action']}}</command> timed out and has been killed.\n"
#     "The output of the command was:\n <output>\n{{output}}\n</output>\n"
#     "Please try another command and make sure to avoid those requiring interactive input."
# )
# format_error_template: str = "Please always provide EXACTLY ONE action in triple backticks."
# action_observation_template: str = "Observation: {{output}}"
# step_limit: int = 0
# cost_limit: float = 3.0