rep_prompt = """You are an expert in testing MCP (Model Context Protocol servers. Within the MCP tool ecosystem, we conduct two distinct types of evaluations to validate tool functionality and usability:

**1. Tool Evaluation:**  
For a given MCP tool and input, we check whether the tool's output adheres to a predefined set of validation rules. This test focuses on interface integrity and execution logic accuracy.

**2. Eval Evaluation:**  
Here, a natural language instruction (query) is sent to the MCP system, which then invokes the target tool. A large language model (LLM) assesses two critical criteria:  
- Whether the correct tool was called
- Whether the tool's output satisfies the intent of the query

The primary objective of these tests is to diagnose issues across the MCP tool ecosystem by analyzing failed cases. Specifically, we aim to identify:

- Bugs or logical faults in the tool's implementation
- Ambiguities or shortcomings in the tool's description (e.g., unclear purpose, incomplete parameter guidelines)
- Practical recommendations to enhance tool performance or usability

**Test Failure Data Format:**  
Below, for the {{ tool_name }} tool under MCP Server {{ server_name }}, failed cases from both evaluation types are presented in standardized JSON for clarity.

**Field Explanations:**

- **Tool Evaluation Failed Case Fields**
  - `input`: Input given to the tool
  - `description`: Purpose and context for the test case
  - `expect`: Correct expected output
  - `env_script`: Environment configuration used for the test
  - `tool_output`: Actual result returned by the tool
  - `rule_not_passed`: List of validation rules the output failed

- **Eval Evaluation Failed Case Fields**
  - `query`: Natural language instruction provided to MCP
  - `eval_output`: Actual output from the invoked tool
  - `message`: LLM's reasoning explaining the test failure

**1. Tool Evaluation Failed Cases**
```json
[
{% for case in tool_failed_details %}
  {
    "input": "{{ case['input'] }}",
    "description": "{{ case['description'] }}",
    "expect": "{{ case['expect'] }}",
    "env_script": "{{ case['env_script'] }}",
    "tool_output": "{{ case['tool_output'] }}",
    "rule_not_passed": [
      {% for rule in case['rule_not_passed'] %}
        "{{ rule }}"
        {% if not loop.last %},{% endif %}
      {% endfor %}
    ]
  }
  {% if not loop.last %},{% endif %}
{% endfor %}
]
```

**2. Eval Evaluation Failed Cases**
```json
[
{% for case in eval_failed_details %}
  {
    "query": "{{ case['query'] }}",
    "eval_output": "{{ case['eval_output'] }}",
    "message": "{{ case['message'] }}"
  }
  {% if not loop.last %},{% endif %}
{% endfor %}
]
```

**3. Target MCP Tool**  
Tool Name: {{ tool_name }}
Description: {{ tool_description }}
Parameters: {{ input_properties }}

The complete source code for the {{ tool_name }} tool, registered via the `mcp.tool()` decorator (the standard MCP tool-registration mechanism), is provided below:

{{ tool_function }}

**Your Task:**  
Generate a concise, focused analysis of the failed test cases for the {{ tool_name }} tool. Follow these strict formatting and content requirements:  

### 1. Analysis of Tool Evaluation Failures  
Provide **a single paragraph** summarizing the root causes of all tool evaluation failures. Focus on patterns (e.g., "All failures stem from missing dependency checks in the source code") rather than listing individual cases.  

### 2. Analysis of Eval Evaluation Failures  
Provide **a single paragraph** summarizing the core issues in natural language handling or tool description that caused eval failures. Avoid bullet points; focus on overarching problems (e.g., "The tool description's ambiguity leads to incorrect LLM invocation").  

### 3. Specific Improvement Recommendations  
Provide **3-5 concrete, actionable fixes** with technical details:  
- For source code issues: Include **revised code snippets**.  
- For tool description issues: Provide **the exact revised description text**.  
- For validation rules or environment issues: Specify the precise adjustments.

All recommendations must directly address the root causes identified in the analysis sections. Keep explanations brief and technical, focusing on "how to fix" rather than general advice."""
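
Note that because the template interpolates raw case values directly into the JSON blocks, a stray quote inside a field such as `tool_output` would produce invalid JSON. The stdlib-only sketch below, using a hypothetical sample case, illustrates the failure mode and the escaping that Jinja2's `tojson` filter performs:

```python
import json

# Hypothetical failed case whose output contains embedded quotes.
case = {"tool_output": 'Error: "file" not found'}

# Naive interpolation, as in "{{ case['tool_output'] }}": the inner
# quotes terminate the JSON string early, so parsing fails.
naive = '{"tool_output": "%s"}' % case["tool_output"]
try:
    json.loads(naive)
    broken = False
except json.JSONDecodeError:
    broken = True

# Proper escaping (equivalent to Jinja2's | tojson filter): json.dumps
# quotes and escapes the value, yielding valid JSON.
escaped = '{"tool_output": %s}' % json.dumps(case["tool_output"])
parsed = json.loads(escaped)
print(broken, parsed["tool_output"])
```

Rendering the full template with `jinja2.Template(rep_prompt).render(...)` then produces parseable failed-case blocks regardless of what the tool outputs contain.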