from langchain_core.prompts import ChatPromptTemplate

Planner_Init_State_Template = ChatPromptTemplate([
    ("system", "You are an expert software engineer specializing in Java JUnit unit testing. "
               "Your task is to generate unit test cases for given method under test./no_think")
])

Method_Analyzer_Template = ChatPromptTemplate([
    ("user", """
First, provide a structured summary of its behavior to facilitate test case generation and explain what the function does in a concise manner.

**Input Format:**
- **Method Under Test:**  
  {method_code}

**Output Format (strictly follow this structure):**
```
 <Summary of the method’s behavior>
```
Ensure that your response follows this structure exactly and does not contain extra text.
The summary should be enclosed in triple backticks (```<summary>```)./no_think
""")
])

Test_Points_Init_Template = ChatPromptTemplate([
    ("user", """
Next, generate test points to cover the method’s behavior. Test points are specific scenarios that test different aspects of the method’s functionality.

Each test point should be structured into the following three sections:

1. **Test Purpose**: Describe the purpose of this test case, specifying whether it targets normal behavior, boundary conditions, or exceptional cases.
2. **Input Type**: Specify the possible input types, including valid, invalid, and edge cases if applicable.
3. **Output Type**: Describe the expected output type and behavior under the given input conditions.

The response must be formatted in a structured, extractable format as follows:

```json
[
    {{
        "Test_Purpose": "<description of the test purpose>",
        "Input_Type": "<description of the input characteristics>",
        "Output_Type": "<description of the expected output>"
    }},
    {{
        "Test_Purpose": "<description of the test purpose>",
        "Input_Type": "<description of the input characteristics>",
        "Output_Type": "<description of the expected output>"
    }}
]
```

Ensure that:
- At least one test covers normal conditions.
- At least one test covers boundary conditions.
- At least one test covers an exceptional case (e.g., invalid input).
- The response strictly adheres to the JSON format to facilitate automated extraction.

Now, generate a set of test points based on the given function./no_think
""")
])

Test_Points_Review_Template = ChatPromptTemplate([
    ("user", "")
])

Generator_Init_State_Template = ChatPromptTemplate([
    ("system",
     "You are an expert in Java software development, specializing in Unit Testing and JUnit. "
     "Your task is to generate high-quality JUnit test cases for a given Java method. "
     "The generated test cases should follow best practices in unit testing, ensuring clarity, "
     "maintainability, and effectiveness in verifying the method's correctness. "
     "Your response should include only a properly formatted JUnit test method, "
     "enclosed within triple backticks (` ```java `), so it can be directly extracted and executed. "
     "If there are any dependencies or unclear contextual details required to generate the test case, "
     "use the available retrieval tools to fetch additional context./no_think"
     ),
    ("user",
     "Generate only one JUnit test case for the following method under test. "
     "The method is located in the `{class_name}` class, which belongs to the `{package_name}` package. "
     "The method signature is as follows:\n\n"
     "```java\n{method_signature}\n```\n\n"
     "**Summary of the Method:**\n"
     "{method_summary}\n\n"
     "### **Test Specification**\n"
     "{test_specification}\n\n"
     "### **Guidelines for Test Case Generation**\n"
     "1. Use **JUnit 4** for test case writing.\n"
     "2. The test case should be **self-contained** and **properly structured**.\n"
     "3. If dependencies or mocks are required, use **Mockito** for mocking.\n"
     "4. Ensure **assertions** properly verify the method's expected behavior.\n"
     "5. The test method should begin with `@Test` and be placed inside a suitable test class.\n"
     "6. If necessary, include **setup methods** using `@BeforeEach` for initialization.\n"
     "7. If any context dependencies are unclear, **use the retrieval tool** to fetch additional details before generating the test case.\n\n"
     "Please return only the runnable test case in the following format:\n\n"
     "```java\n"
     "// Your generated runnable JUnit test case here\n"
     "```\n"
     )
])

Generator_Init_State_with_Function_Information_Template = ChatPromptTemplate([
    ("system",
     "You are an expert in Java software development, specializing in Unit Testing and JUnit. "
     "Your task is to generate high-quality JUnit test cases for a given Java method. "
     "The generated test cases should follow best practices in unit testing, ensuring clarity, "
     "maintainability, and effectiveness in verifying the method's correctness. "
     "Your response should include only a properly formatted JUnit test method, "
     "enclosed within triple backticks (` ```java `), so it can be directly extracted and executed. "
     "If there are any dependencies or unclear contextual details required to generate the test case, "
     "you have access to the following tools to retrieve necessary information:\n\n"
     "{rendered_tools}\n\n"
     "When needed, call the appropriate tool(s) by returning a JSON object in the following format:\n\n"
     "```json\n"
     "{{\"name\": \"tool_name\", \"arguments\": {{ \"key\": \"value\" }} }}\n"
     "```\n"
     "If multiple tools are required, return a JSON array of such objects.\n\n"
     "Do not assume missing information—use the available tools to retrieve context whenever necessary./no_think"
     ),
    ("user",
     "Generate only one JUnit test case for the following method under test. "
     "The method is located in the `{class_name}` class, which belongs to the `{package_name}` package. "
     "The method signature is as follows:\n\n"
     "```java\n{method_signature}\n```\n\n"
     "**Summary of the Method:**\n"
     "{method_summary}\n\n"
     "### **Test Specification**\n"
     "{test_specification}\n\n"
     "### **Guidelines for Test Case Generation**\n"
     "1. Use **JUnit 4** for test case writing.\n"
     "2. The test case should be **self-contained** and **properly structured**.\n"
     "3. If dependencies or mocks are required, use **Mockito** for mocking.\n"
     "4. Ensure **assertions** properly verify the method's expected behavior.\n"
     "5. The test method should begin with `@Test` and be placed inside a suitable test class.\n"
     "6. If necessary, include **setup methods** using `@BeforeEach` for initialization.\n"
     "7. If any context dependencies are unclear, **use the retrieval tool** to fetch additional details before generating the test case.\n\n"
     "Please return only the runnable test case in the following format:\n\n"
     "```java\n"
     "// Your generated runnable JUnit test case here\n"
     "```\n"
     )
])

Generator_Update_Init_State_Template = ChatPromptTemplate([
    ("system",
     "You are an expert in Java software development, specializing in Unit Testing and JUnit testing. "
     "Your task is to improve existing JUnit test cases based on given feedback and specifications. "
     "You will receive a test case that needs improvement along with details of the Java method under test, "
     "including method signature, summary, and a detailed test specification. "
     "Your response must strictly be a complete and correctly formatted JUnit test method, enclosed within triple backticks (` ```java `), "
     "ready to be directly extracted and executed./no_think"
     ),
    ("user",
     """
You are tasked with improving an existing JUnit test case according to the provided information.

## Method Under Test
```java
{method_signature}
```

**Summary of the Method:**
{method_summary}

### **Existing Test Case**
```java
{test_case}
```

### **Test Specification (What to Improve)**
{test_specification}

### **Guidelines for Test Case Improvement**
1. Analyze the provided existing test case carefully.
2. Clearly address the provided improvement points from the specification.
3. Ensure correctness, clarity, and completeness.
4. Validate appropriate assertions, edge cases, and exception handling.
5. Adhere strictly to JUnit standards and best practices.
6. The improved test method must begin with `@Test`, be correctly structured, and directly executable.
7. Include necessary setup (`@BeforeEach`) methods if required for context initialization.

Your response should only include the fully improved and correctly formatted JUnit test case, enclosed within triple backticks (` ```java `).
"""
     )
])

Generator_Update_Init_State_with_Function_Information_Template = ChatPromptTemplate([
    ("system",
     "You are an expert in Java software development, specializing in Unit Testing and JUnit. "
     "Your task is to improve existing JUnit test cases based on provided feedback and specifications. "
     "You must ensure test cases follow best practices in unit testing, providing clarity, maintainability, and correctness. "
     "Your response should only include the improved and properly formatted JUnit test method enclosed within triple backticks (` ```java `), "
     "ready for direct extraction and execution. "
     "If you encounter missing dependencies or unclear contextual information required to improve the test case, "
     "you have access to the following retrieval tools:\n\n"
     "{rendered_tools}\n\n"
     "When needed, call the appropriate tool(s) by returning a JSON object in this format:\n\n"
     "```json\n"
     "{\"name\": \"tool_name\", \"arguments\": { \"key\": \"value\" }}\n"
     "```\n\n"
     "If multiple tools are needed, return a JSON array of such objects. "
     "Never assume missing information—always use the provided tools to fetch any additional necessary context./no_think"
    ),
    ("user",
     """You are required to improve the following existing JUnit test case for the given Java method. 

The method is located in the `{class_name}` class, belonging to the `{package_name}` package.

### Method Signature
```java
{method_signature}
```

**Summary of the Method:**
{method_summary}

### Existing Test Case
```java
{test_case}
```

### Test Specification (Areas to Improve)
{test_specification}

### Guidelines for Test Case Improvement
1. Use **JUnit 4** standards strictly.
2. Ensure the test case is **self-contained**, clearly structured, and easily maintainable.
3. Use **Mockito** for mocking dependencies if required.
4. Clearly verify the expected behavior with appropriate **assertions**.
5. Ensure the improved test method begins with `@Test` and resides within a suitable test class.
6. Include a setup method annotated with `@BeforeEach` if necessary for initializing contexts.
7. If context dependencies are unclear or missing, use the retrieval tools provided to gather required information.

Respond strictly with only the runnable improved test case formatted as follows:

```java
// Your improved runnable JUnit test case here
```
""")
])


Execution_Review_Template = ChatPromptTemplate([
    ("user", "The test case compiled successfully, but the execution result did not match the expected output."
             "This discrepancy could be caused by either of the following reasons:\n"
             "1. The test case itself is incorrect, meaning the expected output does not align with the actual correct behavior of the method.\n"
             "2. The method under test contains a bug that causes it to produce incorrect results.\n\n"
             "Your task is to analyze the situation and determine the most likely cause of this mismatch. "
             "You should carefully examine the test case, the test execution report, and the original test specification to provide an insightful review.\n\n"
             "- **Test Execution Report:**\n"
             "{execution_report}\n\n"
             "- **Original Test Specification:**\n"
             "{test_specification}\n\n"

             "### **Analysis Task**\n"
             "1. Verify whether the test case properly implements the test objective.\n"
             "2. Check if the expected result defined in the test case aligns with the correct behavior of the method.\n"
             "3. Analyze the execution report to identify potential sources of the discrepancy.\n"
             "4. If the test case is incorrect, suggest corrections.\n"
             "5. If the method under test appears to contain a bug, describe the possible issue and suggest debugging steps.\n\n"

             "### **Output Format**\n"
             "Your response must follow one of the two formats:\n\n"

             "#### If the issue is with the **test case**, respond with:\n"
             "```json\n"
             "{{\n"
             "  \"issue\": \"test_case_error\",\n"
             "  \"reason\": \"<explanation of why the test case is incorrect>\",\n"
             "  \"suggested_fix\": \"<corrected test case>\"\n"
             "}}\n"
             "```\n\n"

             "#### If the issue is with the **method under test**, respond with:\n"
             "```json\n"
             "{{\n"
             "  \"issue\": \"method_bug\",\n"
             "  \"reason\": \"<explanation of why the method likely contains a bug>\",\n"
             "  \"hypothesis\": \"<possible cause of the bug>\",\n"
             "  \"suggested_fix\": \"<potential fix for the method>\"\n"
             "}}\n"
             "```\n\n"

             "Ensure your response strictly follows the JSON format above so that it can be programmatically analyzed./no_think"
     )
])

Acceptance_Review_Template = ChatPromptTemplate([
    ("user", "{test_cases}"
     )
])

Final_Test_Case_Fix_Template = ChatPromptTemplate([
    ("user", "The test case you provided contains syntax errors that prevent it from compiling successfully. "
             "These errors could be due to incorrect Java syntax, missing imports, or other issues within the test case code. "
             "Your task is to identify and correct these errors to ensure that the test case compiles and executes./no_think"
     )
])

Evaluator_Init_State_Template = ChatPromptTemplate([
    ("system", "You are an expert evaluator specializing in Java JUnit unit testing. "
               "Your task is to evaluate the quality of the generated JUnit test cases for a given method under test./no_think")
])

Evaluator_Analyzer_Template = ChatPromptTemplate([
    ("user", """
     Based on the test point, test result, coverage data, mutation score, mutants information, and bug detection status, summarize clearly:

- The intended purpose of the test case.
- Whether the test case successfully achieves its intended purpose.
- How well the test case matches the original test points.

**Test Case Information:**
{test_case_info}

**Output Format (strictly follow this structure):**
```
     <Summary of the test case analysis>
```
Ensure that your response follows this structure exactly and does not contain extra text.
The summary should be enclosed in triple backticks (```<summary>```)./no_think
""")
])

Evaluation_Template = ChatPromptTemplate([
    ("user", """Given the following Java method under test and its corresponding test cases, evaluate each test case carefully and provide a structured evaluation.

# Method Under Test
{method_code}

start_line: {start_line}
end_line: {end_line}

method_summary: {method_summary}

# Generated Test Cases Summary
{test_cases_summary}

---

For each test case, you must:
- Provide an overall evaluation decision with the field `decision`, choosing from:
  - **keep**: Retain the test case as is.
  - **update**: Retain the test case but suggest specific improvements or modifications.
  - **discard**: Delete the test case entirely.

Your evaluation should include:
- A clear reason for your decision, based on:
  - Test case execution outcome (Passed, Compile Error, Runtime Error, Syntax Error)
  - Coverage achieved (percentage and specific lines covered)
  - Mutation testing effectiveness
  - Bugs detected
  - Completeness, correctness, and clarity of test logic

- If your decision is "update", clearly specify exactly what should be modified.

Additionally, if important scenarios or edge cases are missing or inadequately covered by the current tests, provide detailed recommendations for additional tests.

Respond strictly in the following JSON format:

```json
{{
  \"evaluations\": [
    {{
      \"test_case_id\": 1,
      \"decision\": \"keep\" | \"update\" | \"discard\",
      \"reason\": \"Detailed explanation for your decision.\",
      \"suggestion\": \"Specific suggestions for modifications if the decision is 'update'. Otherwise, leave empty.\"
    }},
    ...
  ],
  \"addition\": [
    {{
      \"description\": \"Detailed description of an additional test case you recommend adding.\",
      \"test_point\": \"Clearly describe what this test case should verify or handle.\"
    }},
    ...
  ]
}}
```

---

Now, carefully evaluate and respond./no_think""")
])
