import re
def prompt(des, topology, history, context):

    prefix = f'''
# System Description
   The services in the current system include: adservice, cartservice, checkoutservice, currencyservice, emailservice, frontend, paymentservice, productcatalogservice, recommendationservice, shippingservice
   The descriptive relationships between each service are as follows:

```mermaid
graph TD
    User((User)) --> frontend[frontend]
    frontend --> ad[ad]
    frontend --> recommendation[recommendation]
    frontend --> productcatalog[productcatalog]
    frontend --> cart[cart]
    frontend --> checkout[checkout]
    recommendation --> productcatalog
    ad --> tidb[tidb]
    productcatalog --> tidb
    tidb --> tidbtidb[tidb-tidb]
    tidb --> tidbtikv[tidb-tikv]
    tidb --> tidbpd[tidb-pd]
    cart --> RedisCache[(Redis cache)]
    checkout --> payment[payment]
    checkout --> email[email]
    checkout --> shipping[shipping]
    checkout --> currency[currency]
    checkout --> productcatalog
    frontend --> shipping
    frontend --> currency
```

   The mapping relationship between services and nodes in the current system is:

   {topology}
   In addition, the current nodes also include k8s-master1, k8s-master2, and k8s-master3, which serve as Kubernetes management nodes.


# Basic Processing Logic

## The basic processing logic should be:
1.  Check the overall logs for error information about any service, then check the trace logs. Based on the call-error information provided by the traces, inspect the corresponding services, pods, and related information.
2.  If no relevant evidence is found in either the logs or the traces, check each service for abnormalities **in sequence** according to the call relationships.
3.  If no abnormalities are found at the service level, check the Pod and node information in sequence for abnormalities.

## Check layer by layer:
1.  First, check the Service layer. By verifying configurations, endpoint status, and forwarding rules, confirm whether the request entry is normal and determine whether it is a service-routing or load-balancing issue.
2.  Next, focus on the Pod layer. Check running status, logs, and health probes, combined with scheduling-event analysis, to determine whether it is a container application failure or a resource-configuration issue.
3.  Finally, investigate the Node layer. By monitoring node resources, system status, and runtime logs, confirm whether the upper-layer problems are caused by node abnormalities (such as resource exhaustion or network failures).
4.  Please note that **resource utilization metrics are only likely to be considered abnormal if they are > 70%, otherwise they are not considered significant evidence.**
5.  If there is no significant fault location or evidence in the current service call path, then start checking the next call path from the beginning.

## Data Description
1.  The statistical data for each indicator is presented in markdown table format (only indicators with significant differences are provided; those not listed are assumed to have small differences).
2.  Statistical data includes indicators before the fault (before), during the fault (abnormal), and after the fault (after).

# Defined Methods are as follows
  LogSearch(): Check the overall service log situation; this returns the aggregate log information.
  TraceAnalysis(): Check the overall call-chain situation, which can uncover potential problems hidden from the logs, etc.
  LoadMetrics_allservice(): Check performance metrics for all services, including error counts, rrt duration, etc.
  LoadMetrics_allnode(): Check for abnormalities in all nodes, including resource utilization, etc.
  LoadMetrics_node(node): View resource-utilization information for the given node. Call this function once the abnormal node is clear; the node name must be provided when calling it.
  LoadMetrics_full_service(service): Check for errors in a given service, including that service's error count, rrt duration, etc. The service name must be provided when calling it.
  LoadMetrics_service_pod(service): View pod information for the given service. Call this function once the abnormal service is clear, to determine the error count, rrt duration, etc., of the corresponding pods. It is usually used to inspect the pods under a service after that service has been found abnormal. Example: LoadMetrics_service_pod(cartservice)
  LoadMetrics_pod(pod): View resource-utilization information for the given pod. Call this function once the abnormal pod is clear; the pod name must be provided when calling it.
  Loadtidb(): Check tidb logs.
# Problem Description
  The problem given by the user is:
  {des}
# Historical Interaction Information
    '''

    suffix = '''

# Task Description
  First, summarize the current content. Please note that:
  - The summary needs to completely retain abnormal key service or node information, such as service names (e.g., "checkoutservice"), pod names (e.g., "frontend-0"), or node names (e.g., "node-3") at different levels.
  - The summary needs to completely retain abnormal key indicator information, including found error information or field names from the provided markdown indicator statistics tables, such as "node_network_transmit_bytes_total", "node_network_transmit_packets_total", etc.
  - The content of the summary should be "{The node or service}'s {indicator name and corresponding column name} is abnormal, specifically reflected in {reason}", or "{The node or service} is abnormal".
  - Strictly summarize according to the feedback content and historical interaction information. **It is forbidden to include content not in the context, and it is forbidden to fabricate any indicators.**

  
  Then judge whether you can locate the root cause (you must check layer by layer: service -> pod -> infra). Until the root-cause description is given, respond according to the following instructions and attach a brief explanation:
  - If you need to call a method (note: do not explain the cause, call no more than 3 methods, and methods already called in history must not be repeated), then give:
    METHOD: Name of the method to be called
  - If you can judge that there is a new potentially faulty service, give it in the following format:
    SERVICE:Faulty service:Reason for the fault
  - If you can judge that there is a new potentially faulty node, give it in the following format:
    NODE:Faulty node:Reason for the fault
  - After querying the service, pod, and resource layers, if you are sure you can locate the root cause, ensure the localization is complete and give the following mark. Otherwise, you are prohibited from outputting any content containing GOCHA:
    GOCHA:Root Cause
  
  !!!! Wait: before giving the GOCHA root-cause result, reflect on whether all possibly abnormal services, pods, and nodes have been checked. Is the current root cause the real root cause?
      '''
    
    # On the first turn (empty history) include the static system prefix;
    # on later turns the accumulated history replaces it.
    if history == '':
        return [prefix, context, suffix]
    else:
        return [history, context, suffix]
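A minimal standalone sketch of how the three-part return value of `prompt()` is meant to be consumed: `[prefix-or-history, context, suffix]`, joined into a single prompt string. `assemble`, `demo_turns`, and the sample strings are illustrative only, not part of this module's interface.

```python
def assemble(parts):
    # Join the non-empty message parts into one prompt string.
    return '\n'.join(p for p in parts if p)

def demo_turns():
    prefix = '# System Description ...'
    context = '## feedback: LogSearch() returned no error logs'
    suffix = '# Task Description ...'
    history = ''  # empty on the first turn
    # Same branching as prompt(): the prefix is only sent once.
    parts = [prefix, context, suffix] if history == '' else [history, context, suffix]
    return assemble(parts)
```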

def gen_result(history, uuid):
    return '''
# Task Instructions
Based on the analysis log of the fault, reconstruct the fault-localization path and generate a json string that meets the format requirements below.

# Format Requirements and Examples
## Format Requirements
The field descriptions of the json string are as follows:

| Field Name     | Type     | Required | Description                                                  |
| :------------- | :------- | :------- | :----------------------------------------------------------- |
| uuid           | string   | Yes      | The uuid of the fault case corresponding to this result.     |
| component      | string   | Yes      | The name of the **root cause** component. Each sample only evaluates one root cause component. If multiple components are submitted, only the first component field that appears in the JSON will be evaluated. The type must be string. |
| reason         | string   | Yes      | The cause or type of the fault, expressed as concisely as possible. If it involves key metric names, they must be completely consistent with the column names in the markdown table given in the interaction record. For example, cpu_usage_rate cannot be written as cpu usage rate, and the case must also be completely consistent! If it involves logs, it must also be completely consistent with the log keywords! |
| reasoning_trace | object[] | Yes      | The complete reasoning trajectory, including each step's action/observation, etc., expressed as concisely as possible. |


### reasoning_trace Field Description
#### Basic Format
    "reasoning_trace" is an array containing multiple step objects, each of which should include the following fields:
    step: integer, representing the reasoning step number (starting from 1);
    action: string, describing the call or operation of this step;
    observation: string, describing the result observed in this step, expressed as concisely as possible.
    It is recommended to use snake_case naming style for all field names to avoid mixed case.

#### Key Steps
    **reasoning_trace only gives key reasoning steps related to the fault that can reflect abnormalities.**

#### Key Information
    In the reasoning steps, key positioning information needs to be reflected. Specifically, each fault's label contains three types of key information:
    -   metric key information: includes some key metric names, which must be completely consistent with the column names in the markdown table given in the interaction record. For example, cpu_usage_rate cannot be written as cpu usage rate, and the case must also be completely consistent!
    -   log key information: includes log retrieval behavior and some key information from the logs.
    -   trace key information: includes key fault instances in the call chain, such as checkoutservice.

### Field Value Constraints
    The component field can be service, pod, node, etc. The names of service, pod, and node are restricted to ensure component accuracy:
    -   service names can only be taken from the following range: "adservice","cartservice","currencyservice","productcatalogservice","checkoutservice","recommendationservice","shippingservice","emailservice","paymentservice","tidb-pd","tidb-tidb","tidb-tikv"
    -   node names can only be taken from the following range: "aiops-k8s-01","aiops-k8s-02","aiops-k8s-03","aiops-k8s-04","aiops-k8s-05","aiops-k8s-06","aiops-k8s-07","aiops-k8s-08","k8s-master1","k8s-master2","k8s-master3"
    -   pod names can only be taken from the following range: "adservice-0","adservice-1","adservice-2","cartservice-0","cartservice-1","cartservice-2","currencyservice-0","currencyservice-1","currencyservice-2","productcatalogservice-0","productcatalogservice-1","productcatalogservice-2","checkoutservice-0","checkoutservice-1","checkoutservice-2","recommendationservice-0","recommendationservice-1","recommendationservice-2","shippingservice-0","shippingservice-1","shippingservice-2","emailservice-0","emailservice-1","emailservice-2","paymentservice-0","paymentservice-1","paymentservice-2","tidb-pd-0","tidb-tidb-0","tidb-tikv-0"

## Example
   The example is as follows:
   ```json
    {
      "uuid": "33c11d00-2",
      "component": "checkoutservice",
      "reason": "disk IO overload",
      "reasoning_trace": [
        {
          "step": 1,
          "action": "LoadMetrics(checkoutservice)",
          "observation": "disk_read_latency spike"
        },
        {
          "step": 2,
          "action": "TraceAnalysis('frontend -> checkoutservice')",
          "observation": "checkoutservice self-loop spans"
        },
        {
          "step": 3,
          "action": "LogSearch(checkoutservice)",
          "observation": "IOError in 3 logs"
        }
      ]
    }
   ```

# Generation Process:
  1. First, analyze the possible root cause components, reasons, and key steps based on the given system topology diagram, service call relationships, and analysis history.
  2. Generate the json string according to the requirements. It should be noted that it needs to be generated in the format of ```json... and the generated content needs to be parsable by json.loads().

# Analysis Log
  The following is the fault location history trajectory\n''' + f'''{re.sub(r'(^#)|( +#)', '##', history)}

# uuid
The uuid is:
{uuid}

Please generate the final json string as required:
    '''
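The prompt above requires the model to answer inside a ```json fence that `json.loads()` can parse. A caller-side helper along these lines (illustrative; not part of this module) can extract and sanity-check that block:

```python
import json
import re

def parse_result(reply):
    # Pull the contents of the first ```json ... ``` fence, if present;
    # otherwise try to parse the whole reply.
    match = re.search(r'```json\s*(.*?)```', reply, re.DOTALL)
    payload = match.group(1) if match else reply
    result = json.loads(payload)
    # Enforce the required fields from the format table above.
    for field in ('uuid', 'component', 'reason', 'reasoning_trace'):
        if field not in result:
            raise ValueError('missing required field: ' + field)
    return result
```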


def describe_table(table, threshold='', graph=False):
    # NOTE: `GRAPH` below is expected to be a module-level topology string;
    # it must be defined before calling with graph=True, otherwise a
    # NameError is raised while the f-string is formatted.
    return f'''
# Task
  Please summarize each column of indicators in the following table, reflecting the changes of each indicator before, during, and after the abnormality occurred.
# Data Description
  1. The statistical data for each indicator is presented in markdown table format (only indicators with significant differences are provided; those not listed are assumed to have small differences).
  2. Statistical data includes indicators before the fault (before), during the fault (abnormal), and after the fault (after).
  3. Based on each indicator's specific meaning and its relative changes before, during, and after the abnormality, report only indicators that change markedly and reflect system abnormalities.
  4. Only give a summary of the significantly abnormal indicators; there is no need for abnormality analysis, an overall summary, next-step plans, etc.

  {'## The reference threshold for abnormal values of resource utilization is:' if threshold != '' else ''}
  {threshold}

  {'## The topology of the current system is as follows:' + GRAPH if graph else ''}

 Please summarize the following table:
  {table}
'''
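A sketch of the optional-section pattern `describe_table()` relies on: blocks are spliced into the output only when their inputs are present. `GRAPH` here is a stand-in for the module-level topology constant the real function expects; replace it with the actual mermaid topology string.

```python
# Stand-in for the module-level topology constant (assumption, not the
# real system topology).
GRAPH = 'frontend --> checkout --> payment'

def optional_sections(threshold='', graph=False):
    # Build only the sections whose inputs were supplied, mirroring the
    # conditional expressions inside describe_table()'s f-string.
    parts = []
    if threshold:
        parts.append('## The reference threshold for abnormal values of resource utilization is:\n' + threshold)
    if graph:
        parts.append('## The topology of the current system is as follows:\n' + GRAPH)
    return '\n'.join(parts)
```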
