from typing import Generator

from script import common_client
from script.tool_client import stool, LlmTool, ReplyUser, ReactReq
from util import LlmMessage


@stool(description="Get the prompt-writing guide. Use it when the user wants to write instructions aimed at an AI; it helps the user write clear and effective instructions", parameters={
    "description_of_requirement": {
        "type": "string",
        "description": "A concise description of the requirement; the guide differs slightly for different kinds of requirements"
    }
})
def get_prompt_build_guide(args: dict, req: ReactReq) -> Generator[ReplyUser, None, None]:
    yield ReplyUser('''Six strategies for getting better results
1. Clear and concise instructions
The model cannot read your mind. The less the model has to guess about what you want, the more likely you are to get it.

2. A specific role setting and requirement background
Giving the model a role setting and background knowledge that fit the requirement can noticeably improve its accuracy on the task.

3. A complete description of the task, with the necessary details
To get an accurate response, make sure the request includes every important detail and piece of context; otherwise the model's output will be hard to control.
Bad example: Write code to compute the Fibonacci sequence.
Good example: Write a TypeScript function to compute the Fibonacci sequence efficiently. Comment the code heavily, explaining what each part does and why it is written that way.

4. Spell out the steps required to complete the task
Some tasks are best expressed as an explicit sequence of steps. Writing the steps out makes it easier for the model to follow them.

5. Provide effective examples
Examples make it easier for the model to understand the requirement and the output format. For instance, if you want the model to copy a particular style of responding to user queries, and that style is hard to describe explicitly, you can convey it through examples instead.

6. Tell the model to think carefully before rushing to a conclusion
We get better results when the model does the necessary reasoning before answering. Suppose we want a model to grade a student's solution to a math problem. The obvious approach is to simply ask whether the solution is correct, but a judgment made without reasoning is likely to be wrong. Instead, prompt the model to first work out the correct solution itself and only then evaluate the student's answer against it; this works noticeably better than evaluating directly.
''')
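Strategy 5 (provide effective examples) is often implemented as a few-shot message list. A minimal sketch using plain role/content dicts (this shape is a common chat-API convention, an assumption here, not this project's LlmMessage type):

```python
# Hypothetical few-shot prompt: two worked examples precede the real query,
# so the model can imitate their style and output format.
few_shot_messages = [
    {"role": "system", "content": "Classify the sentiment of a merchant phrase as positive, negative, or neutral."},
    {"role": "user", "content": "The delivery was fast, thanks!"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Why was my order cancelled again?"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "um"},  # the real query goes last
]
```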


@stool(description="Get prompt-learning documents. Use it when the user wants to study prompt writing systematically")
def get_prompt_learn_docs(args: dict, req: ReactReq) -> Generator[ReplyUser, None, None]:
    yield ReplyUser('''Here are some documents that should help you.
OpenAI docs:
 - https://platform.openai.com/overview

Prompt Engineering Guide:
 - https://learnprompting.org/docs/intro

Blogs and GitHub projects:
 - https://lilianweng.github.io/
 - https://github.com/f/awesome-chatgpt-prompts
 - https://github.com/DSXiangLi/DecryptPrompt
 - https://github.com/promptslab/Promptify

LangChain:
 - https://python.langchain.com/docs/get_started/introduction

Prompt testing and batch runs:
 - http://11.161.105.225:5678/promptb
 - http://11.161.105.225:5678/prompts''')


@stool(description="Write a clear and effective prompt from the business requirement and its caveats", parameters={
    "description_of_requirement": {
        "type": "string",
        "description": "A detailed description of the business requirement and its background"
    },
    "special_scenarios_and_handling": {
        "type": "string",
        "description": "Special cases of the business requirement and how to handle them"
    }
}, required=["description_of_requirement", "special_scenarios_and_handling"])
def generate_prompt(args: dict, req: ReactReq) -> Generator[ReplyUser, None, None]:
    if not args.get('description_of_requirement', '').strip() or not args.get('special_scenarios_and_handling', '').strip():
        yield ReplyUser(content='Please provide a detailed business requirement and the handling of its special cases, and I will help you write the prompt')
        return

    sys_prompt = '''You are a top-tier "instruction master". Your task is to write a clear and accurate instruction from the user's requirement and its details, so that a natural-language model can fulfill the target requirement accurately and efficiently.
An instruction is what is commonly called a prompt. A clear instruction is critical for the large model to fulfill the requirement; if the instruction is poorly written, Santa Claus will immediately die because of it.

Components of a Top instruction
1. Role Setting: As the name suggests, the "role setting" is a description of a character. If you want the model to act as a developer, the role setting could be "an excellent server-side Java development engineer who meticulously handles the coding and debugging of web projects." The role setting must express the role's characteristics clearly and simply; its fundamental purpose is to make the large model aware of the current environment and its features.
2. Angle Setting: The point of the angle setting is to shift the large model's perspective towards serving the target customers well. For example, without an angle setting, if you ask "how many bad reviews do I have", the model would simply answer with the number once it finds that "the merchant has 10 bad reviews". With an angle setting such as "your core goal is to answer questions from the merchants' perspective, assist them in running their stores, and improve their business results and customer satisfaction", it will also look up "how to avoid bad reviews" and offer suggestions after checking the reviews.
3. Task Rules: The description of the task rules is indispensable; it should be the core logic of the task.
4. Special Scenarios and Handling Methods: Business requirements often diverge from the model's default reasoning path. Here you can emphasize the special scenarios and how to handle them, ideally in the form [Typical Scenario + Explanation]. For example, in a knowledge question-answering scene it is important that the large model does not fabricate information, which you can express as: "Do not fabricate content: use only the provided 'knowledge' to answer; if the 'knowledge' cannot solve the merchant's problem, politely tell the merchant that you are temporarily unable to answer."

The user will provide two crucial inputs:
1. description_of_requirement: a detailed description of the business requirement and its background
2. special_scenarios_and_handling: special cases of the business requirement and how to handle them

Use the following format:
description_of_requirement: On the Ele.me food-delivery platform, classify the sentiment of phrases that merchants send when contacting platform customer service; the options are positive, negative, and neutral
special_scenarios_and_handling: 1. Neutral words with no clear leaning, such as "I'm here", should be classified as "neutral"; 2. Meaningless words, such as "um" and "ah", should also be classified as "neutral"
Rule analysis: The requirement is to classify the sentiment of what merchants say, and the phrases are sent when merchants contact Ele.me platform customer service. This is an intent-classification task whose options are ["positive", "negative", "neutral"], and the role the model should play is a professional customer-service agent for a food-delivery platform.
Special scenarios analysis: Positive and negative cases are easy to tell apart; the hard part is classifying meaningless words and words with no leaning, which the requirement emphasizes.
Top instruction: ```You are an excellent customer-service agent for Ele.me merchants. Your core goal is to take the merchants' perspective, answer their questions, and help them run their stores so as to improve their business results and customer satisfaction.\nYou will now receive merchant inquiries. Classify the sentiment of each phrase a merchant sends: "positive" means the merchant's mood is relaxed, "negative" means the merchant is dissatisfied, and "neutral" means the merchant's mood has no clear leaning.\nPlease note:\n1. No clear leaning: neutral words with no clear leaning, such as "I'm here", should be classified as "neutral";\n2. Meaningless words: meaningless characters or garbled text should be classified as "neutral"```
'''

    user_prompt = '''Use the following format:
description_of_requirement: {description_of_requirement}
special_scenarios_and_handling: {special_scenarios_and_handling}'''
    llm_res_gen = common_client.llm_predict_stream(msg=[LlmMessage(role='system', content=sys_prompt), LlmMessage(role='user', content=user_prompt)], params=args, model='gpt-4 8K')
    for llm_res in llm_res_gen:
        llm_res.check_success()
        yield ReplyUser(llm_res.result, finished=llm_res.finished)
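The `user_prompt` above keeps `{description_of_requirement}` and `{special_scenarios_and_handling}` as literal placeholders and passes `params=args` alongside. A minimal sketch of the substitution this presumably implies (the actual behavior of `llm_predict_stream` is not visible in this file, so this is an assumption):

```python
def render_template(template: str, params: dict) -> str:
    # Fill {name} placeholders from the params dict, str.format-style.
    return template.format(**params)

filled = render_template(
    "description_of_requirement: {description_of_requirement}",
    {"description_of_requirement": "classify merchant sentiment"},
)
# filled == "description_of_requirement: classify merchant sentiment"
```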


@stool(description="Optimize a prompt to make it clearer and more accurate", parameters={
    "instruction": {
        "type": "string",
        "description": "The complete instruction to be optimized"
    }
}, required=["instruction"])
def optimization_prompt(args: dict, req: ReactReq) -> Generator[ReplyUser, None, None]:
    if not args.get('instruction', '').strip():
        yield ReplyUser(content='Please provide the prompt you would like optimized, and I will help you optimize it')
        return
    sys_prompt = '''You are a top-tier "instruction master". Your task is to optimize an existing instruction so that a natural-language model can fulfill the target requirement accurately and efficiently.
An instruction is what is commonly called a prompt. A clear instruction is critical for the large model to fulfill the requirement; if the instruction is poorly written, Santa Claus will immediately die because of it.

Components of a Top instruction
1. Role Setting: As the name suggests, the "role setting" is a description of a character. If you want the model to act as a developer, the role setting could be "an excellent server-side Java development engineer who meticulously handles the coding and debugging of web projects." The role setting must express the role's characteristics clearly and simply; its fundamental purpose is to make the large model aware of the current environment and its features.
2. Angle Setting: The point of the angle setting is to shift the large model's perspective towards serving the target customers well. For example, without an angle setting, if you ask "how many bad reviews do I have", the model would simply answer with the number once it finds that "the merchant has 10 bad reviews". With an angle setting such as "your core goal is to answer questions from the merchants' perspective, assist them in running their stores, and improve their business results and customer satisfaction", it will also look up "how to avoid bad reviews" and offer suggestions after checking the reviews.
3. Task Rules: The description of the task rules is indispensable; it should be the core logic of the task.
4. Special Scenarios and Handling Methods: Business requirements often diverge from the model's default reasoning path. Here you can emphasize the special scenarios and how to handle them, ideally in the form [Typical Scenario + Explanation]. For example, in a knowledge question-answering scene it is important that the large model does not fabricate information, which you can express as: "Do not fabricate content: use only the provided 'knowledge' to answer; if the 'knowledge' cannot solve the merchant's problem, politely tell the merchant that you are temporarily unable to answer."

Please note:
Keep the logic and rules of the instruction unchanged while optimizing; only make the expression clearer and more fluent.'''
    user_prompt = '''
Input: {instruction}
Output: '''
    llm_res_gen = common_client.llm_predict_stream(msg=[LlmMessage(role='system', content=sys_prompt), LlmMessage(role='user', content=user_prompt)], params=args, model='gpt-4 8K')
    for llm_res in llm_res_gen:
        llm_res.check_success()
        yield ReplyUser(llm_res.result, finished=llm_res.finished)


@stool(description="Get the example-writing guide, which helps the user write examples that fit the prompt's scenario")
def get_example_build_guide(args: dict, req: ReactReq):
    return ReplyUser(content='''Key considerations for a top example:
- Incorporate the thought process: the heart of an example is its thought process. It should lay out a logical trajectory of thought rather than a hasty, underdeveloped conclusion. For example, for a query like "The merchant wants to set a delivery fee", a subpar response is a bare "I don't know". An improved thought process is: "Though the merchant wants to set a delivery fee, the provided knowledge focuses on 'handling late deliveries'. Since that does not address the merchant's concern, my response should be 'I don't know'".

- Representative logic: the different scenarios previously outlined in the task description should be depicted here, effectively combining "telling" with "doing".

- Rule callbacks: instructions often contain many rules, such as "for queries outside the Ele.me business domain, respond 'I don't know'". Merely stating such rules may not have much effect, but actively recalling them in the examples and their thought processes reinforces their importance. For example: "If a merchant asks 'Where can I download Meituan Takeaway?', that falls outside Ele.me's business domain, so my response should be 'I don't know'".

- Do not use placeholder characters: to keep examples short, placeholders (xxx) are sometimes used in place of specifics. However, because large models overfit heavily, real outputs may then contain the placeholders. It is better to use short descriptions that highlight the scenario's characteristics.

- Avoid redundancy: keep the examples rich and diverse, and avoid repetition.''')


@stool(description="Write scenario-appropriate examples from the business requirement and its caveats", parameters={
    "description_of_requirement": {
        "type": "string",
        "description": "A detailed description of the business requirement and its background"
    },
    "special_scenarios": {
        "type": "string",
        "description": "Special cases of the business requirement"
    }
}, required=["description_of_requirement"])
def generate_example(args: dict, req: ReactReq) -> Generator[ReplyUser, None, None]:
    if not args.get('description_of_requirement', '').strip():
        yield ReplyUser(content='Please provide the business requirement, and I will help you write matching examples')
        return

    sys_prompt = '''You are a top-tier "instruction master". You excel at writing examples that match a business requirement, so that a natural-language model can fulfill the target requirement accurately and efficiently.
An instruction is what is commonly called a prompt. Adding typical examples that match the instruction is critical; if the examples are poorly written, Santa Claus will immediately die because of it.

Key Considerations for a Top Example
- Incorporating Thought Process: The centerpiece of an example is its thought process. It should articulate a logical trajectory of thought instead of a hasty, underdeveloped conclusion.
Example: For a query like "The merchant wants to set a delivery fee", a subpar response is a bare "I don't know". An improved thought process is: "Though the merchant wants to set a delivery fee, the provided knowledge focuses on 'handling late deliveries'. Since that does not address the merchant's concern, my response should be 'I don't know'".

- Representativeness of Logic: The different scenarios previously outlined in the task description should be depicted here, effectively combining "telling" with "doing".

- Rule Callbacks: Instructions often contain many rules, such as "for queries outside the Ele.me business domain, respond 'I don't know'". Merely stating such rules may not have much effect, but proactively recalling them in the examples and their thought processes reinforces their significance.
Example: "If a merchant asks 'Where can I download Meituan Takeaway?', that falls outside Ele.me's business domain, hence my response should be 'I don't know'".

- Avoid Placeholders: To keep examples short, placeholders (xxx) are sometimes used in place of specifics. However, because large models overfit heavily, real outputs may then contain the placeholders, so it is better to use short summarizing descriptions.

- Avoid Redundancy: Ensure richness and diversity in the examples, and avoid repetition.

Please note:
 - If the user does not provide special scenarios for the business requirement, use your head and dig hard for the special cases the scenario might involve.

Use the following format:
description_of_requirement: On the Ele.me food-delivery platform, classify the sentiment of phrases that merchants send when contacting platform customer service; the options are positive, negative, and neutral
special_scenarios: no clear meaning
Top Example: ```Input: at noon I \nThought: The merchant's phrase appears incomplete, so its sentiment cannot be determined; it is a phrase with no clear meaning and should be classified as "neutral"\nAnswer: neutral```'''

    user_prompt = '''Use the following format:
description_of_requirement: {description_of_requirement}
special_scenarios: {special_scenarios}'''
    llm_res_gen = common_client.llm_predict_stream(msg=[LlmMessage(role='system', content=sys_prompt), LlmMessage(role='user', content=user_prompt)], params=args, model='gpt-4 8K')
    for llm_res in llm_res_gen:
        llm_res.check_success()
        yield ReplyUser(llm_res.result, finished=llm_res.finished)


@stool(description="Optimize an example to better highlight the business logic", parameters={
    "description_of_requirement": {
        "type": "string",
        "description": "A detailed description of the business requirement and its background"
    },
    "example": {
        "type": "string",
        "description": "The example to be optimized"
    }
}, required=["description_of_requirement", "example"])
def optimization_example(args: dict, req: ReactReq) -> Generator[ReplyUser, None, None]:
    if not args.get('description_of_requirement', '').strip() or not args.get('example', '').strip():
        yield ReplyUser(content='Please provide the example to be optimized and its business requirement, and I will help you optimize it')
        return
    sys_prompt = '''You are a top-tier "instruction master". You excel at writing examples that match a natural-language model's instruction, so that the model can fulfill the target requirement accurately and efficiently.
An instruction is what is commonly called a prompt. Adding typical examples that match the instruction is critical; if the examples are poorly written, Santa Claus will immediately die because of it.

Key Considerations for a Top Example
- Incorporating Thought Process: The centerpiece of an example is its thought process. It should articulate a logical trajectory of thought instead of a hasty, underdeveloped conclusion.
Example: For a query like "The merchant wants to set a delivery fee", a subpar response is a bare "I don't know". An improved thought process is: "Though the merchant wants to set a delivery fee, the provided knowledge focuses on 'handling late deliveries'. Since that does not address the merchant's concern, my response should be 'I don't know'".

- Representativeness of Logic: The different scenarios previously outlined in the task description should be depicted here, effectively combining "telling" with "doing".

- Rule Callbacks: Instructions often contain many rules, such as "for queries outside the Ele.me business domain, respond 'I don't know'". Merely stating such rules may not have much effect, but proactively recalling them in the examples and their thought processes reinforces their significance.
Example: "If a merchant asks 'Where can I download Meituan Takeaway?', that falls outside Ele.me's business domain, hence my response should be 'I don't know'".

- Avoid Placeholders: To keep examples short, placeholders (xxx) are sometimes used in place of specifics. However, because large models overfit heavily, real outputs may then contain the placeholders, so it is better to use short summarizing descriptions.

- Avoid Redundancy: Ensure richness and diversity in the examples, and avoid repetition.
'''
    user_prompt = '''
description_of_requirement: {description_of_requirement}
example: {example}
Output: '''
    llm_res_gen = common_client.llm_predict_stream(msg=[LlmMessage(role='system', content=sys_prompt), LlmMessage(role='user', content=user_prompt)], params=args, model='gpt-4 8K')
    for llm_res in llm_res_gen:
        llm_res.check_success()
        yield ReplyUser(llm_res.result, finished=llm_res.finished)
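Every tool above strips its string arguments before checking them; since `dict.get` returns `None` for a missing key, a defensive accessor is a useful pattern here. A hypothetical helper (not part of this codebase) sketching it:

```python
def get_stripped_arg(args: dict, key: str) -> str:
    # Return the argument with surrounding whitespace removed,
    # or '' when the key is missing or its value is None.
    return (args.get(key) or '').strip()
```

Guards such as `if not get_stripped_arg(args, 'instruction'):` then behave identically for missing, None, empty, and whitespace-only values.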
