谦言 committed on
Commit
98e07ff
1 Parent(s): 7218539
README.md CHANGED
@@ -1,11 +1,60 @@
1
  ---
2
- title: AgentScope
3
- emoji: 🦀
4
- colorFrom: purple
5
- colorTo: green
6
- sdk: docker
7
- pinned: false
8
- license: apache-2.0
9
  ---
10
 
11
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
  ---
2
+ title: AgentScope
3
+ emoji: 🦀
4
+ colorFrom: purple
5
+ colorTo: green
6
+ sdk: docker
7
+ pinned: false
8
+ license: apache-2.0
9
  ---
10
 
11
+ <h1> Modelscope AgentFabric: Customizable AI-Agents For All</h1>
12
+
13
+ <p align="center">
14
+ <br>
15
+ <img src="https://modelscope.oss-cn-beijing.aliyuncs.com/modelscope.gif" width="400"/>
16
+ <br>
17
+ </p>
18
+
19
+ ## Introduction
20
+ **ModelScope AgentFabric** is an interactive framework that facilitates the creation of agents tailored to various real-world applications. AgentFabric is built around pluggable and customizable LLMs and enhances their capabilities for instruction following, extra knowledge retrieval and external tool use. AgentFabric is woven with the following interfaces:
21
+ - ⚡ **Agent Builder**: an automatic instruction and tool provider that customizes the user's agent through natural conversational interactions.
22
+ - ⚡ **User Agent**: a customized agent for building real-world applications, with instructions, extra knowledge and tools provided by the builder agent and/or user inputs.
23
+ - ⚡ **Configuration Tooling**: the interface for customizing user-agent configurations, with real-time preview of agent behavior as new configurations are applied (an illustrative configuration is sketched below).
24
+
25
+ 🔗 We currently leverage AgentFabric to build various agents around the [Qwen2.0 LLM API](https://help.aliyun.com/zh/dashscope/developer-reference/api-details) available via DashScope. We are also actively exploring
26
+ other options for incorporating (and comparing) more LLMs, both via API and via native ModelScope models.
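+
+ For orientation, the sketch below shows the shape of the agent configuration that the Configuration Tooling reads and writes. It mirrors the `builder_cfg` dictionary assembled in `app.py` later in this commit; the field values here are purely illustrative.
+
+ ```python
+ # Illustrative agent configuration, mirroring the builder_cfg dict built in
+ # app.py's process_configuration(). All concrete values are made up.
+ builder_cfg = {
+     'name': 'StoryTeller',                      # display name of the agent
+     'avatar': 'custom_bot_avatar.png',          # avatar image saved by the builder
+     'description': 'An agent that tells interactive stories.',
+     'instruction': 'You are a creative story-telling assistant.',
+     'prompt_recommend': ['Tell me a sci-fi story'],  # suggested prompts shown in the UI
+     'knowledge': [],                            # paths of uploaded knowledge files
+     'tools': {
+         'image_gen': {'name': 'Image Generation', 'is_active': True, 'use': True},
+     },
+     'model': 'qwen-max',                        # LLM backing the user agent
+     'language': 'zh',
+ }
+ ```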
27
+
28
+
29
+ ## Installation
30
+ Simply clone the repo and install the dependencies.
31
+ ```bash
32
+ git clone https://github.com/modelscope/modelscope-agent.git
33
+ cd modelscope-agent && pip install -r requirements.txt && pip install -r demo/agentfabric/requirements.txt
34
+ ```
35
+
36
+ ## Prerequisites
37
+
38
+ - Python 3.10
39
+ - Access to an LLM API service such as [DashScope](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key) (free to start); a quick key check is sketched below.
40
+
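+ If you want to confirm that your DashScope key works before launching the app, a minimal check along these lines can help. This is only a sketch: it assumes the `dashscope` Python SDK and its `Generation.call` interface, and the `qwen-turbo` model name is just an example.
+
+ ```python
+ # Quick DashScope sanity check (assumes `pip install dashscope` and that
+ # DASHSCOPE_API_KEY is exported, as shown in the Usage section below).
+ import os
+
+ import dashscope
+ from dashscope import Generation
+
+ dashscope.api_key = os.environ['DASHSCOPE_API_KEY']
+ resp = Generation.call(model='qwen-turbo', prompt='Say hello in one short sentence.')
+ print(resp.status_code)  # 200 indicates the key and model access are working
+ ```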
41
+ ## Usage
42
+
43
+ ```bash
44
+ export PYTHONPATH=$PYTHONPATH:/path/to/your/modelscope-agent
45
+ export DASHSCOPE_API_KEY=your_api_key
46
+ cd modelscope-agent/demo/agentfabric
47
+ python app.py
48
+ ```
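+
+ `app.py` serves the full builder UI; this commit also adds `appBot.py`, which serves only the configured user agent. If you prefer to drive the agent without the Gradio UI, the rough sketch below is based on how both scripts call `user_core.init_user_chatbot_agent` and `stream_run` in this commit; treat it as an illustration rather than a supported API.
+
+ ```python
+ # Headless use of the configured agent, sketched from how app.py / appBot.py
+ # consume init_user_chatbot_agent() and stream_run() in this commit.
+ from user_core import init_user_chatbot_agent
+
+ agent = init_user_chatbot_agent('local_user')  # same default uuid as appBot.py
+ response = ''
+ for frame in agent.stream_run(
+         'Hello', print_info=True, remote=False, append_files=[]):
+     llm_text = frame.get('llm_text', '')
+     exec_result = frame.get('exec_result', '')
+     if len(exec_result) != 0:
+         # tool results are wrapped in <result> tags, as in the Gradio handlers
+         if isinstance(exec_result, dict):
+             exec_result = str(exec_result['result'])
+         response += f'<result>{exec_result}</result>'
+     else:
+         response += llm_text
+ print(response)
+ ```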
49
+
50
+ ## 🚀 Roadmap
51
+ - [x] Allow customizable agent-building via configurations.
52
+ - [x] Agent-building through interactive conversations with LLMs.
53
+ - [x] Support multi-user preview on ModelScope space. [link](https://modelscope.cn/studios/wenmengzhou/AgentFabric/summary) [PR #98](https://github.com/modelscope/modelscope-agent/pull/98)
54
+ - [x] Optimize knowledge retrieval. [PR #105](https://github.com/modelscope/modelscope-agent/pull/105) [PR #107](https://github.com/modelscope/modelscope-agent/pull/107) [PR #109](https://github.com/modelscope/modelscope-agent/pull/109)
55
+ - [x] Allow publication and sharing of agents. [PR #111](https://github.com/modelscope/modelscope-agent/pull/111)
56
+ - [ ] Support more pluggable LLMs via API or ModelScope interface.
57
+ - [ ] Improve long context via memory.
58
+ - [ ] Improve logging and profiling.
59
+ - [ ] Fine-tuning for specific agents.
60
+ - [ ] Evaluation for agents in different scenarios.
README_CN.md ADDED
@@ -0,0 +1,52 @@
1
+
2
+ <h1> Modelscope AgentFabric: An Open, Customizable Framework for Building AI Agents</h1>
3
+
4
+ <p align="center">
5
+ <br>
6
+ <img src="https://modelscope.oss-cn-beijing.aliyuncs.com/modelscope.gif" width="400"/>
7
+ <br>
8
+ </p>
9
+
10
+ ## Introduction
11
+
12
+ **Modelscope AgentFabric** is an interactive agent framework for conveniently creating agents tailored to various real-world applications. AgentFabric is built around pluggable and customizable LLMs and strengthens their abilities for instruction following, extra knowledge retrieval and external tool use. The interactive interfaces provided by AgentFabric include:
13
+ - **⚡ Agent Builder**: an automatic instruction and tool provider that customizes the user's agent by chatting with the user
14
+ - **⚡ User Agent**: an agent customized for the user's real-world application, equipped with the instructions, extra knowledge and tools provided by the builder agent or by user input
15
+ - **⚡ Configuration Tooling**: lets users customize the user agent's configuration and preview its behavior in real time
16
+
17
+ 🔗 We currently build a range of agent applications on AgentFabric around the [Qwen2.0 LLM API](https://help.aliyun.com/zh/dashscope/developer-reference/api-details) provided by DashScope. We are also actively exploring the introduction of other LLMs with strong foundational capabilities, via APIs or native ModelScope models, to build a rich variety of agents.
18
+
19
+ ## Installation
20
+
21
+ Clone the repo and install the dependencies:
22
+
23
+ ```bash
24
+ git clone https://github.com/modelscope/modelscope-agent.git
25
+ cd modelscope-agent && pip install -r requirements.txt && pip install -r demo/agentfabric/requirements.txt
26
+ ```
27
+
28
+ ## Prerequisites
29
+
30
+ - Python 3.10
31
+ - An API key for the Qwen 2.0 models, which can be activated and obtained for free from [DashScope](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key).
32
+
33
+ ## Usage
34
+
35
+ ```bash
36
+ export PYTHONPATH=$PYTHONPATH:/path/to/your/modelscope-agent
37
+ export DASHSCOPE_API_KEY=your_api_key
38
+ cd modelscope-agent/demo/agentfabric
39
+ python app.py
40
+ ```
41
+
42
+ ## 🚀 Roadmap
43
+ - [x] Allow agent-building via manual configuration
44
+ - [x] Agent-building through conversations with LLMs
45
+ - [x] Support usage on ModelScope Studio [link](https://modelscope.cn/studios/wenmengzhou/AgentFabric/summary) [PR #98](https://github.com/modelscope/modelscope-agent/pull/98)
46
+ - [x] Optimize knowledge-base retrieval [PR #105](https://github.com/modelscope/modelscope-agent/pull/105) [PR #107](https://github.com/modelscope/modelscope-agent/pull/107) [PR #109](https://github.com/modelscope/modelscope-agent/pull/109)
47
+ - [x] Allow publication and sharing of agents
48
+ - [ ] Support more LLM APIs and ModelScope models
49
+ - [ ] Handle long text inputs via memory
50
+ - [ ] Production-grade support: logging and profiling
51
+ - [ ] Support agent fine-tuning
52
+ - [ ] Evaluate agents in different scenarios
__init__.py ADDED
@@ -0,0 +1,4 @@
1
+ from .builder_prompt import BuilderPromptGenerator
2
+ from .builder_prompt_zh import ZhBuilderPromptGenerator
3
+ from .custom_prompt import CustomPromptGenerator
4
+ from .custom_prompt_zh import ZhCustomPromptGenerator
app.py ADDED
@@ -0,0 +1,737 @@
1
+ import importlib
2
+ import os
3
+ import random
4
+ import shutil
5
+ import traceback
6
+
7
+ import gradio as gr
8
+ import json
9
+ import yaml
10
+ from builder_core import beauty_output, init_builder_chatbot_agent
11
+ from config_utils import (DEFAULT_AGENT_DIR, Config, get_avatar_image,
12
+ get_ci_dir, get_user_cfg_file, get_user_dir,
13
+ is_valid_plugin_configuration, parse_configuration,
14
+ save_avatar_image, save_builder_configuration,
15
+ save_plugin_configuration)
16
+ from gradio_utils import ChatBot, format_cover_html, format_goto_publish_html
17
+ from i18n import I18n
18
+ from modelscope_agent.utils.logger import agent_logger as logger
19
+ from publish_util import (pop_user_info_from_config, prepare_agent_zip,
20
+ reload_agent_zip)
21
+ from user_core import init_user_chatbot_agent
22
+
23
+
24
+ def init_user(uuid_str, state):
25
+ try:
26
+ seed = state.get('session_seed', random.randint(0, 1000000000))
27
+ user_agent = init_user_chatbot_agent(uuid_str)
28
+ user_agent.seed = seed
29
+ state['user_agent'] = user_agent
30
+ except Exception as e:
31
+ logger.error(
32
+ uuid=uuid_str,
33
+ error=str(e),
34
+ content={'error_traceback': traceback.format_exc()})
35
+ return state
36
+
37
+
38
+ def init_builder(uuid_str, state):
39
+ try:
40
+ builder_agent = init_builder_chatbot_agent(uuid_str)
41
+ state['builder_agent'] = builder_agent
42
+ except Exception as e:
43
+ logger.error(
44
+ uuid=uuid_str,
45
+ error=str(e),
46
+ content={'error_traceback': traceback.format_exc()})
47
+ return state
48
+
49
+
50
+ def update_builder(uuid_str, state):
51
+
52
+ try:
53
+ builder_agent = init_builder_chatbot_agent(uuid_str)
54
+ state['builder_agent'] = builder_agent
55
+ except Exception as e:
56
+ logger.error(
57
+ uuid=uuid_str,
58
+ error=str(e),
59
+ content={'error_traceback': traceback.format_exc()})
60
+
61
+ return state
62
+
63
+
64
+ def check_uuid(uuid_str):
65
+ if not uuid_str or uuid_str == '':
66
+ if os.getenv('MODELSCOPE_ENVIRONMENT') == 'studio':
67
+ raise gr.Error('请登陆后使用! (Please login first)')
68
+ else:
69
+ uuid_str = 'local_user'
70
+ return uuid_str
71
+
72
+
73
+ def process_configuration(uuid_str, bot_avatar, name, description,
74
+ instructions, model, agent_language, suggestions,
75
+ knowledge_files, capabilities_checkboxes,
76
+ openapi_schema, openapi_auth, openapi_auth_apikey,
77
+ openapi_auth_apikey_type, openapi_privacy_policy,
78
+ state):
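+ # Persist the form inputs as the agent's builder (and optional OpenAPI plugin)
+ # configuration, then rebuild the builder and user agents so the preview
+ # reflects the new settings.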
79
+ uuid_str = check_uuid(uuid_str)
80
+ tool_cfg = state['tool_cfg']
81
+ capabilities = state['capabilities']
82
+ bot_avatar, bot_avatar_path = save_avatar_image(bot_avatar, uuid_str)
83
+ suggestions_filtered = [row for row in suggestions if row[0]]
84
+ if len(suggestions_filtered) == 0:
85
+ suggestions_filtered = [['']]
86
+ user_dir = get_user_dir(uuid_str)
87
+ if knowledge_files is not None:
88
+ new_knowledge_files = [
89
+ os.path.join(user_dir, os.path.basename((f.name)))
90
+ for f in knowledge_files
91
+ ]
92
+ for src_file, dst_file in zip(knowledge_files, new_knowledge_files):
93
+ if not os.path.exists(dst_file):
94
+ shutil.copy(src_file.name, dst_file)
95
+ else:
96
+ new_knowledge_files = []
97
+
98
+ builder_cfg = {
99
+ 'name': name,
100
+ 'avatar': bot_avatar,
101
+ 'description': description,
102
+ 'instruction': instructions,
103
+ 'prompt_recommend': [row[0] for row in suggestions_filtered],
104
+ 'knowledge': new_knowledge_files,
105
+ 'tools': {
106
+ capability: dict(
107
+ name=tool_cfg[capability]['name'],
108
+ is_active=tool_cfg[capability]['is_active'],
109
+ use=True if capability in capabilities_checkboxes else False)
110
+ for capability in map(lambda item: item[1], capabilities)
111
+ },
112
+ 'model': model,
113
+ 'language': agent_language,
114
+ }
115
+
116
+ try:
117
+ try:
118
+ schema_dict = json.loads(openapi_schema)
119
+ except json.decoder.JSONDecodeError:
120
+ schema_dict = yaml.safe_load(openapi_schema)
121
+ except Exception as e:
122
+ raise gr.Error(
123
+ f'OpenAPI schema format error, should be one of json and yaml: {e}'
124
+ )
125
+
126
+ openapi_plugin_cfg = {
127
+ 'schema': schema_dict,
128
+ 'auth': {
129
+ 'type': openapi_auth,
130
+ 'apikey': openapi_auth_apikey,
131
+ 'apikey_type': openapi_auth_apikey_type
132
+ },
133
+ 'privacy_policy': openapi_privacy_policy
134
+ }
135
+ if is_valid_plugin_configuration(openapi_plugin_cfg):
136
+ save_plugin_configuration(openapi_plugin_cfg, uuid_str)
137
+ except Exception as e:
138
+ logger.error(
139
+ uuid=uuid_str,
140
+ error=str(e),
141
+ content={'error_traceback': traceback.format_exc()})
142
+
143
+ save_builder_configuration(builder_cfg, uuid_str)
144
+ update_builder(uuid_str, state)
145
+ init_user(uuid_str, state)
146
+ return [
147
+ gr.HTML.update(
148
+ visible=True,
149
+ value=format_cover_html(builder_cfg, bot_avatar_path)),
150
+ gr.Chatbot.update(
151
+ visible=False,
152
+ avatar_images=get_avatar_image(bot_avatar, uuid_str)),
153
+ gr.Dataset.update(samples=suggestions_filtered),
154
+ gr.DataFrame.update(value=suggestions_filtered)
155
+ ]
156
+
157
+
158
+ # Create the Gradio interface
159
+ demo = gr.Blocks(css='assets/app.css')
160
+ with demo:
161
+
162
+ uuid_str = gr.Textbox(label='modelscope_uuid', visible=False)
163
+ draw_seed = random.randint(0, 1000000000)
164
+ state = gr.State({'session_seed': draw_seed})
165
+ i18n = I18n('zh-cn')
166
+ with gr.Row():
167
+ with gr.Column(scale=5):
168
+ header = gr.Markdown(i18n.get('header'))
169
+ with gr.Column(scale=1):
170
+ language = gr.Dropdown(
171
+ choices=[('中文', 'zh-cn'), ('English', 'en')],
172
+ show_label=False,
173
+ container=False,
174
+ value='zh-cn',
175
+ interactive=True)
176
+ with gr.Row():
177
+ with gr.Column():
178
+ with gr.Tabs() as tabs:
179
+ with gr.Tab(i18n.get_whole('create'), id=0) as create_tab:
180
+ with gr.Column():
181
+ # "Create" 标签页的 Chatbot 组件
182
+ start_text = '欢迎使用agent创建助手。我可以帮助您创建一个定制agent。'\
183
+ '您希望您的agent主要用于什么领域或任务?比如,您可以说,我想做一个RPG游戏agent'
184
+ create_chatbot = gr.Chatbot(
185
+ show_label=False, value=[[None, start_text]])
186
+ create_chat_input = gr.Textbox(
187
+ label=i18n.get('message'),
188
+ placeholder=i18n.get('message_placeholder'))
189
+ create_send_button = gr.Button(
190
+ i18n.get('sendOnLoading'), interactive=False)
191
+
192
+ configure_tab = gr.Tab(i18n.get_whole('configure'), id=1)
193
+ with configure_tab:
194
+ with gr.Column():
195
+ # "Configure" 标签页的配置输入字段
196
+ with gr.Row():
197
+ bot_avatar_comp = gr.Image(
198
+ label=i18n.get('form_avatar'),
199
+ placeholder='Chatbot avatar image',
200
+ source='upload',
201
+ interactive=True,
202
+ type='filepath',
203
+ scale=1,
204
+ width=182,
205
+ height=182,
206
+ )
207
+ with gr.Column(scale=4):
208
+ name_input = gr.Textbox(
209
+ label=i18n.get('form_name'),
210
+ placeholder=i18n.get(
211
+ 'form_name_placeholder'))
212
+ description_input = gr.Textbox(
213
+ label=i18n.get('form_description'),
214
+ placeholder=i18n.get(
215
+ 'form_description_placeholder'))
216
+
217
+ instructions_input = gr.Textbox(
218
+ label=i18n.get('form_instructions'),
219
+ placeholder=i18n.get(
220
+ 'form_instructions_placeholder'),
221
+ lines=3)
222
+ model_selector = gr.Dropdown(
223
+ label=i18n.get('form_model'))
224
+ agent_language_selector = gr.Dropdown(
225
+ label=i18n.get('form_agent_language'),
226
+ choices=['zh', 'en'],
227
+ value='zh')
228
+ suggestion_input = gr.Dataframe(
229
+ show_label=False,
230
+ value=[['']],
231
+ datatype=['str'],
232
+ headers=[i18n.get_whole('form_prompt_suggestion')],
233
+ type='array',
234
+ col_count=(1, 'fixed'),
235
+ interactive=True)
236
+ gr.Markdown(
237
+ '*注意:知识库上传的文本文档默认按照\\n\\n切分,pdf默认按照页切分。如果片段'
238
+ '对应的字符大于[配置文件](https://github.com/modelscope/modelscope-agent/'
239
+ 'blob/master/apps/agentfabric/config/model_config.json)中指定模型的'
240
+ 'knowledge限制,则在被召回时有可能会被截断。*')
241
+ knowledge_input = gr.File(
242
+ label=i18n.get('form_knowledge'),
243
+ file_count='multiple',
244
+ file_types=[
245
+ 'text', '.json', '.csv', '.pdf', '.md'
246
+ ])
247
+ capabilities_checkboxes = gr.CheckboxGroup(
248
+ label=i18n.get('form_capabilities'))
249
+
250
+ with gr.Accordion(
251
+ i18n.get('open_api_accordion'),
252
+ open=False) as open_api_accordion:
253
+ openapi_schema = gr.Textbox(
254
+ label='Schema',
255
+ placeholder=
256
+ 'Enter your OpenAPI schema here, JSON or YAML format only'
257
+ )
258
+
259
+ with gr.Group():
260
+ openapi_auth_type = gr.Radio(
261
+ label='Authentication Type',
262
+ choices=['None', 'API Key'],
263
+ value='None')
264
+ openapi_auth_apikey = gr.Textbox(
265
+ label='API Key',
266
+ placeholder='Enter your API Key here')
267
+ openapi_auth_apikey_type = gr.Radio(
268
+ label='API Key type', choices=['Bearer'])
269
+ openapi_privacy_policy = gr.Textbox(
270
+ label='Privacy Policy',
271
+ placeholder='Enter privacy policy URL')
272
+
273
+ configure_button = gr.Button(
274
+ i18n.get('form_update_button'))
275
+
276
+ with gr.Accordion(
277
+ label=i18n.get('import_config'),
278
+ open=False) as update_accordion:
279
+ with gr.Column():
280
+ update_space = gr.Textbox(
281
+ label=i18n.get('space_addr'),
282
+ placeholder=i18n.get('input_space_addr'))
283
+ import_button = gr.Button(
284
+ i18n.get_whole('import_space'))
285
+ gr.Markdown(
286
+ f'#### {i18n.get_whole("import_hint")}')
287
+
288
+ with gr.Column():
289
+ # Preview
290
+ preview_header = gr.HTML(
291
+ f"""<div class="preview_header">{i18n.get('preview')}<div>""")
292
+
293
+ user_chat_bot_cover = gr.HTML(format_cover_html({}, None))
294
+ user_chatbot = ChatBot(
295
+ value=[[None, None]],
296
+ elem_id='user_chatbot',
297
+ elem_classes=['markdown-body'],
298
+ avatar_images=get_avatar_image('', uuid_str),
299
+ height=650,
300
+ latex_delimiters=[],
301
+ show_label=False,
302
+ visible=False)
303
+ preview_chat_input = gr.Textbox(
304
+ label=i18n.get('message'),
305
+ placeholder=i18n.get('message_placeholder'))
306
+ user_chat_bot_suggest = gr.Dataset(
307
+ label=i18n.get('prompt_suggestion'),
308
+ components=[preview_chat_input],
309
+ samples=[])
310
+ # preview_send_button = gr.Button('Send')
311
+ with gr.Row():
312
+ upload_button = gr.UploadButton(
313
+ i18n.get('upload_btn'),
314
+ file_types=['file', 'image', 'audio', 'video', 'text'],
315
+ file_count='multiple')
316
+ preview_send_button = gr.Button(
317
+ i18n.get('sendOnLoading'), interactive=False)
318
+ user_chat_bot_suggest.select(
319
+ lambda evt: evt[0],
320
+ inputs=[user_chat_bot_suggest],
321
+ outputs=[preview_chat_input])
322
+ with gr.Accordion(
323
+ label=i18n.get('publish'),
324
+ open=False) as publish_accordion:
325
+ publish_alert_md = gr.Markdown(f'{i18n.get("publish_alert")}')
326
+ with gr.Row():
327
+ with gr.Column():
328
+ publish_button = gr.Button(i18n.get_whole('build'))
329
+ build_hint_md = gr.Markdown(
330
+ f'#### 1.{i18n.get("build_hint")}')
331
+
332
+ with gr.Column():
333
+ publish_link = gr.HTML(
334
+ value=format_goto_publish_html(
335
+ i18n.get_whole('publish'), '', {}, True))
336
+ publish_hint_md = gr.Markdown(
337
+ f'#### 2.{i18n.get("publish_hint")}')
338
+
339
+ configure_updated_outputs = [
340
+ state,
341
+ # config form
342
+ bot_avatar_comp,
343
+ name_input,
344
+ description_input,
345
+ instructions_input,
346
+ model_selector,
347
+ agent_language_selector,
348
+ suggestion_input,
349
+ knowledge_input,
350
+ capabilities_checkboxes,
351
+ # bot
352
+ user_chat_bot_cover,
353
+ user_chat_bot_suggest,
354
+ preview_send_button,
355
+ create_send_button,
356
+ ]
357
+
358
+ # Initialize the configuration form
359
+ def init_ui_config(uuid_str, _state, builder_cfg, model_cfg, tool_cfg):
360
+ logger.info(
361
+ uuid=uuid_str,
362
+ message='builder_cfg',
363
+ content={'builder_cfg': str(builder_cfg)})
364
+ # available models
365
+ models = list(model_cfg.keys())
366
+ capabilities = [(tool_cfg[tool_key]['name'], tool_key)
367
+ for tool_key in tool_cfg.keys()
368
+ if tool_cfg[tool_key].get('is_active', False)]
369
+ _state['model_cfg'] = model_cfg
370
+ _state['tool_cfg'] = tool_cfg
371
+ _state['capabilities'] = capabilities
372
+ bot_avatar = get_avatar_image(builder_cfg.get('avatar', ''),
373
+ uuid_str)[1]
374
+ suggests = builder_cfg.get('prompt_recommend', [''])
375
+ return {
376
+ state:
377
+ _state,
378
+ bot_avatar_comp:
379
+ gr.Image.update(value=bot_avatar),
380
+ name_input:
381
+ builder_cfg.get('name', ''),
382
+ description_input:
383
+ builder_cfg.get('description'),
384
+ instructions_input:
385
+ builder_cfg.get('instruction'),
386
+ model_selector:
387
+ gr.Dropdown.update(
388
+ value=builder_cfg.get('model', models[0]), choices=models),
389
+ agent_language_selector:
390
+ builder_cfg.get('language') or 'zh',
391
+ suggestion_input:
392
+ [[str] for str in suggests] if len(suggests) > 0 else [['']],
393
+ knowledge_input:
394
+ builder_cfg.get('knowledge', [])
395
+ if len(builder_cfg['knowledge']) > 0 else None,
396
+ capabilities_checkboxes:
397
+ gr.CheckboxGroup.update(
398
+ value=[
399
+ tool for tool in builder_cfg.get('tools', {}).keys()
400
+ if builder_cfg.get('tools').get(tool).get('use', False)
401
+ ],
402
+ choices=capabilities),
403
+ # bot
404
+ user_chat_bot_cover:
405
+ format_cover_html(builder_cfg, bot_avatar),
406
+ user_chat_bot_suggest:
407
+ gr.Dataset.update(samples=[[item] for item in suggests]),
408
+ }
409
+
410
+ # Event handler for tab switching
411
+ def on_congifure_tab_select(_state, uuid_str):
412
+ uuid_str = check_uuid(uuid_str)
413
+ configure_updated = _state.get('configure_updated', False)
414
+ if configure_updated:
415
+ builder_cfg, model_cfg, tool_cfg, available_tool_list, _, _ = parse_configuration(
416
+ uuid_str)
417
+ _state['configure_updated'] = False
418
+ return init_ui_config(uuid_str, _state, builder_cfg, model_cfg,
419
+ tool_cfg)
420
+ else:
421
+ return {state: _state}
422
+
423
+ configure_tab.select(
424
+ on_congifure_tab_select,
425
+ inputs=[state, uuid_str],
426
+ outputs=configure_updated_outputs)
427
+
428
+ # 配置 "Create" 标签页的消息发送功能
429
+ def format_message_with_builder_cfg(_state, chatbot, builder_cfg,
430
+ uuid_str):
431
+ uuid_str = check_uuid(uuid_str)
432
+ bot_avatar = builder_cfg.get('avatar', '')
433
+ prompt_recommend = builder_cfg.get('prompt_recommend', [''])
434
+ suggestion = [[row] for row in prompt_recommend]
435
+ bot_avatar_path = get_avatar_image(bot_avatar, uuid_str)[1]
436
+ save_builder_configuration(builder_cfg, uuid_str)
437
+ _state['configure_updated'] = True
438
+ return {
439
+ create_chatbot:
440
+ chatbot,
441
+ user_chat_bot_cover:
442
+ gr.HTML.update(
443
+ visible=True,
444
+ value=format_cover_html(builder_cfg, bot_avatar_path)),
445
+ user_chatbot:
446
+ gr.Chatbot.update(
447
+ visible=False,
448
+ avatar_images=get_avatar_image(bot_avatar, uuid_str)),
449
+ user_chat_bot_suggest:
450
+ gr.Dataset.update(samples=suggestion)
451
+ }
452
+
453
+ def create_send_message(chatbot, input, _state, uuid_str):
454
+ uuid_str = check_uuid(uuid_str)
455
+ # Append the sent message to the chat history
456
+ builder_agent = _state['builder_agent']
457
+ chatbot.append((input, ''))
458
+ yield {
459
+ create_chatbot: chatbot,
460
+ create_chat_input: gr.Textbox.update(value=''),
461
+ }
462
+ response = ''
463
+ for frame in builder_agent.stream_run(
464
+ input, print_info=True, uuid_str=uuid_str):
465
+ llm_result = frame.get('llm_text', '')
466
+ exec_result = frame.get('exec_result', '')
467
+ step_result = frame.get('step', '')
468
+ logger.info(
469
+ uuid=uuid_str, message='frame', content={'frame': str(frame)})
470
+ if len(exec_result) != 0:
471
+ if isinstance(exec_result, dict):
472
+ exec_result = exec_result['result']
473
+ assert isinstance(exec_result, Config)
474
+ yield format_message_with_builder_cfg(
475
+ _state,
476
+ chatbot,
477
+ exec_result.to_dict(),
478
+ uuid_str=uuid_str)
479
+ else:
480
+ # llm result
481
+ if isinstance(llm_result, dict):
482
+ content = llm_result['content']
483
+ else:
484
+ content = llm_result
485
+ frame_text = content
486
+ response = beauty_output(f'{response}{frame_text}',
487
+ step_result)
488
+ chatbot[-1] = (input, response)
489
+ yield {
490
+ create_chatbot: chatbot,
491
+ }
492
+
493
+ create_send_button.click(
494
+ create_send_message,
495
+ inputs=[create_chatbot, create_chat_input, state, uuid_str],
496
+ outputs=[
497
+ create_chatbot, user_chat_bot_cover, user_chatbot,
498
+ user_chat_bot_suggest, create_chat_input
499
+ ])
500
+
501
+ # 配置 "Configure" 标签页的提交按钮功能
502
+ configure_button.click(
503
+ process_configuration,
504
+ inputs=[
505
+ uuid_str, bot_avatar_comp, name_input, description_input,
506
+ instructions_input, model_selector, agent_language_selector,
507
+ suggestion_input, knowledge_input, capabilities_checkboxes,
508
+ openapi_schema, openapi_auth_type, openapi_auth_apikey,
509
+ openapi_auth_apikey_type, openapi_privacy_policy, state
510
+ ],
511
+ outputs=[
512
+ user_chat_bot_cover, user_chatbot, user_chat_bot_suggest,
513
+ suggestion_input
514
+ ])
515
+
516
+ # 配置 "Preview" 的消息发送功能
517
+ def preview_send_message(chatbot, input, _state, uuid_str):
518
+ # Append the sent message to the chat history
519
+ _uuid_str = check_uuid(uuid_str)
520
+ user_agent = _state['user_agent']
521
+ if 'new_file_paths' in _state:
522
+ new_file_paths = _state['new_file_paths']
523
+ else:
524
+ new_file_paths = []
525
+ _state['new_file_paths'] = []
526
+
527
+ chatbot.append((input, ''))
528
+ yield {
529
+ user_chatbot: gr.Chatbot.update(visible=True, value=chatbot),
530
+ user_chat_bot_cover: gr.HTML.update(visible=False),
531
+ preview_chat_input: gr.Textbox.update(value='')
532
+ }
533
+
534
+ response = ''
535
+ try:
536
+ for frame in user_agent.stream_run(
537
+ input,
538
+ print_info=True,
539
+ remote=False,
540
+ append_files=new_file_paths,
541
+ uuid=_uuid_str):
542
+ llm_result = frame.get('llm_text', '')
543
+ exec_result = frame.get('exec_result', '')
544
+ if len(exec_result) != 0:
545
+ # action_exec_result
546
+ if isinstance(exec_result, dict):
547
+ exec_result = str(exec_result['result'])
548
+ frame_text = f'<result>{exec_result}</result>'
549
+ else:
550
+ # llm result
551
+ frame_text = llm_result
552
+
553
+ # important! do not change this
554
+ response += frame_text
555
+ chatbot[-1] = (input, response)
556
+ yield {user_chatbot: chatbot}
557
+ except Exception as e:
558
+ if 'dashscope.common.error.AuthenticationError' in str(e):
559
+ msg = 'DASHSCOPE_API_KEY should be set via environment variable. You can acquire this in ' \
560
+ 'https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key'
561
+ elif 'rate limit' in str(e):
562
+ msg = 'Too many people are calling, please try again later.'
563
+ else:
564
+ msg = str(e)
565
+ chatbot[-1] = (input, msg)
566
+ yield {user_chatbot: chatbot}
567
+
568
+ preview_send_button.click(
569
+ preview_send_message,
570
+ inputs=[user_chatbot, preview_chat_input, state, uuid_str],
571
+ outputs=[user_chatbot, user_chat_bot_cover, preview_chat_input])
572
+
573
+ def upload_file(chatbot, upload_button, _state, uuid_str):
574
+ uuid_str = check_uuid(uuid_str)
575
+ new_file_paths = []
576
+ if 'file_paths' in _state:
577
+ file_paths = _state['file_paths']
578
+ else:
579
+ file_paths = []
580
+ for file in upload_button:
581
+ file_name = os.path.basename(file.name)
582
+ # convert xxx.json to xxx_uuid_str.json
583
+ file_name = file_name.replace('.', f'_{uuid_str}.')
584
+ file_path = os.path.join(get_ci_dir(), file_name)
585
+ if not os.path.exists(file_path):
586
+ # make sure file path's directory exists
587
+ os.makedirs(os.path.dirname(file_path), exist_ok=True)
588
+ shutil.copy(file.name, file_path)
589
+ file_paths.append(file_path)
590
+ new_file_paths.append(file_path)
591
+ if file_name.endswith(('.jpeg', '.png', '.jpg')):
592
+ chatbot += [((file_path, ), None)]
593
+
594
+ else:
595
+ chatbot.append((None, f'上传文件{file_name},成功'))
596
+ yield {
597
+ user_chatbot: gr.Chatbot.update(visible=True, value=chatbot),
598
+ user_chat_bot_cover: gr.HTML.update(visible=False),
599
+ preview_chat_input: gr.Textbox.update(value='')
600
+ }
601
+
602
+ _state['file_paths'] = file_paths
603
+ _state['new_file_paths'] = new_file_paths
604
+
605
+ upload_button.upload(
606
+ upload_file,
607
+ inputs=[user_chatbot, upload_button, state, uuid_str],
608
+ outputs=[user_chatbot, user_chat_bot_cover, preview_chat_input])
609
+
610
+ # configuration for publish
611
+ def publish_agent(name, uuid_str, state):
612
+ uuid_str = check_uuid(uuid_str)
613
+ env_params = {}
614
+ env_params.update(
615
+ pop_user_info_from_config(DEFAULT_AGENT_DIR, uuid_str))
616
+ output_url, envs_required = prepare_agent_zip(name, DEFAULT_AGENT_DIR,
617
+ uuid_str, state)
618
+ env_params.update(envs_required)
619
+ # output_url = "https://test.url"
620
+ return format_goto_publish_html(
621
+ i18n.get_whole('publish'), output_url, env_params)
622
+
623
+ publish_button.click(
624
+ publish_agent,
625
+ inputs=[name_input, uuid_str, state],
626
+ outputs=[publish_link],
627
+ )
628
+
629
+ def import_space(agent_url, uuid_str, state):
630
+ uuid_str = check_uuid(uuid_str)
631
+ _ = reload_agent_zip(agent_url, DEFAULT_AGENT_DIR, uuid_str, state)
632
+
633
+ # update config
634
+ builder_cfg, model_cfg, tool_cfg, available_tool_list, _, _ = parse_configuration(
635
+ uuid_str)
636
+ return init_ui_config(uuid_str, state, builder_cfg, model_cfg,
637
+ tool_cfg)
638
+
639
+ import_button.click(
640
+ import_space,
641
+ inputs=[update_space, uuid_str, state],
642
+ outputs=configure_updated_outputs,
643
+ )
644
+
645
+ def change_lang(language):
646
+ i18n = I18n(language)
647
+ return {
648
+ bot_avatar_comp:
649
+ gr.Image(label=i18n.get('form_avatar')),
650
+ name_input:
651
+ gr.Textbox(
652
+ label=i18n.get('form_name'),
653
+ placeholder=i18n.get('form_name_placeholder')),
654
+ description_input:
655
+ gr.Textbox(
656
+ label=i18n.get('form_description'),
657
+ placeholder=i18n.get('form_description_placeholder')),
658
+ instructions_input:
659
+ gr.Textbox(
660
+ label=i18n.get('form_instructions'),
661
+ placeholder=i18n.get('form_instructions_placeholder')),
662
+ model_selector:
663
+ gr.Dropdown(label=i18n.get('form_model')),
664
+ agent_language_selector:
665
+ gr.Dropdown(label=i18n.get('form_agent_language')),
666
+ knowledge_input:
667
+ gr.File(label=i18n.get('form_knowledge')),
668
+ capabilities_checkboxes:
669
+ gr.CheckboxGroup(label=i18n.get('form_capabilities')),
670
+ open_api_accordion:
671
+ gr.Accordion(label=i18n.get('open_api_accordion')),
672
+ configure_button:
673
+ gr.Button(i18n.get('form_update_button')),
674
+ preview_header:
675
+ gr.HTML(
676
+ f"""<div class="preview_header">{i18n.get('preview')}<div>"""),
677
+ preview_send_button:
678
+ gr.Button.update(value=i18n.get('send')),
679
+ create_chat_input:
680
+ gr.Textbox(
681
+ label=i18n.get('message'),
682
+ placeholder=i18n.get('message_placeholder')),
683
+ create_send_button:
684
+ gr.Button.update(value=i18n.get('send')),
685
+ user_chat_bot_suggest:
686
+ gr.Dataset(label=i18n.get('prompt_suggestion')),
687
+ preview_chat_input:
688
+ gr.Textbox(
689
+ label=i18n.get('message'),
690
+ placeholder=i18n.get('message_placeholder')),
691
+ publish_accordion:
692
+ gr.Accordion(label=i18n.get('publish')),
693
+ upload_button:
694
+ gr.UploadButton(i18n.get('upload_btn')),
695
+ header:
696
+ gr.Markdown(i18n.get('header')),
697
+ publish_alert_md:
698
+ gr.Markdown(f'{i18n.get("publish_alert")}'),
699
+ build_hint_md:
700
+ gr.Markdown(f'#### 1.{i18n.get("build_hint")}'),
701
+ publish_hint_md:
702
+ gr.Markdown(f'#### 2.{i18n.get("publish_hint")}'),
703
+ }
704
+
705
+ language.select(
706
+ change_lang,
707
+ inputs=[language],
708
+ outputs=configure_updated_outputs + [
709
+ configure_button, create_chat_input, open_api_accordion,
710
+ preview_header, preview_chat_input, publish_accordion,
711
+ upload_button, header, publish_alert_md, build_hint_md,
712
+ publish_hint_md
713
+ ])
714
+
715
+ def init_all(uuid_str, _state):
716
+ uuid_str = check_uuid(uuid_str)
717
+ builder_cfg, model_cfg, tool_cfg, available_tool_list, _, _ = parse_configuration(
718
+ uuid_str)
719
+ ret = init_ui_config(uuid_str, _state, builder_cfg, model_cfg,
720
+ tool_cfg)
721
+ yield ret
722
+ init_user(uuid_str, _state)
723
+ init_builder(uuid_str, _state)
724
+ yield {
725
+ state:
726
+ _state,
727
+ preview_send_button:
728
+ gr.Button.update(value=i18n.get('send'), interactive=True),
729
+ create_send_button:
730
+ gr.Button.update(value=i18n.get('send'), interactive=True),
731
+ }
732
+
733
+ demo.load(
734
+ init_all, inputs=[uuid_str, state], outputs=configure_updated_outputs)
735
+
736
+ demo.queue(concurrency_count=10)
737
+ demo.launch(show_error=True)
appBot.py ADDED
@@ -0,0 +1,184 @@
1
+ import os
2
+ import random
3
+ import shutil
4
+ import traceback
5
+
6
+ import gradio as gr
7
+ from config_utils import get_avatar_image, get_ci_dir, parse_configuration
8
+ from gradio_utils import ChatBot, format_cover_html
9
+ from modelscope_agent.utils.logger import agent_logger as logger
10
+ from user_core import init_user_chatbot_agent
11
+
12
+ uuid_str = 'local_user'
13
+ builder_cfg, model_cfg, tool_cfg, available_tool_list, _, _ = parse_configuration(
14
+ uuid_str)
15
+ suggests = builder_cfg.get('prompt_recommend', [])
16
+ avatar_pairs = get_avatar_image(builder_cfg.get('avatar', ''), uuid_str)
17
+
18
+ customTheme = gr.themes.Default(
19
+ primary_hue=gr.themes.utils.colors.blue,
20
+ radius_size=gr.themes.utils.sizes.radius_none,
21
+ )
22
+
23
+
24
+ def check_uuid(uuid_str):
25
+ if not uuid_str or uuid_str == '':
26
+ if os.getenv('MODELSCOPE_ENVIRONMENT') == 'studio':
27
+ raise gr.Error('请登陆后使用! (Please login first)')
28
+ else:
29
+ uuid_str = 'local_user'
30
+ return uuid_str
31
+
32
+
33
+ def init_user(state):
34
+ try:
35
+ seed = state.get('session_seed', random.randint(0, 1000000000))
36
+ user_agent = init_user_chatbot_agent(uuid_str)
37
+ user_agent.seed = seed
38
+ state['user_agent'] = user_agent
39
+ except Exception as e:
40
+ logger.error(
41
+ uuid=uuid_str,
42
+ error=str(e),
43
+ content={'error_traceback': traceback.format_exc()})
44
+ return state
45
+
46
+
47
+ # Create the Gradio interface
48
+ demo = gr.Blocks(css='assets/appBot.css', theme=customTheme)
49
+ with demo:
50
+ gr.Markdown(
51
+ '# <center> \N{fire} AgentFabric powered by Modelscope-agent ([github star](https://github.com/modelscope/modelscope-agent/tree/main))</center>' # noqa E501
52
+ )
53
+ draw_seed = random.randint(0, 1000000000)
54
+ state = gr.State({'session_seed': draw_seed})
55
+ with gr.Row(elem_classes='container'):
56
+ with gr.Column(scale=4):
57
+ with gr.Column():
58
+ # Preview
59
+ user_chatbot = ChatBot(
60
+ value=[[None, '尝试问我一点什么吧~']],
61
+ elem_id='user_chatbot',
62
+ elem_classes=['markdown-body'],
63
+ avatar_images=avatar_pairs,
64
+ height=600,
65
+ latex_delimiters=[],
66
+ show_label=False)
67
+ with gr.Row():
68
+ with gr.Column(scale=12):
69
+ preview_chat_input = gr.Textbox(
70
+ show_label=False,
71
+ container=False,
72
+ placeholder='跟我聊聊吧~')
73
+ with gr.Column(min_width=70, scale=1):
74
+ upload_button = gr.UploadButton(
75
+ '上传',
76
+ file_types=['file', 'image', 'audio', 'video', 'text'],
77
+ file_count='multiple')
78
+ with gr.Column(min_width=70, scale=1):
79
+ preview_send_button = gr.Button('发送', variant='primary')
80
+
81
+ with gr.Column(scale=1):
82
+ user_chat_bot_cover = gr.HTML(
83
+ format_cover_html(builder_cfg, avatar_pairs[1]))
84
+ user_chat_bot_suggest = gr.Examples(
85
+ label='Prompt Suggestions',
86
+ examples=suggests,
87
+ inputs=[preview_chat_input])
88
+
89
+ def upload_file(chatbot, upload_button, _state):
90
+ _uuid_str = check_uuid(uuid_str)
91
+ new_file_paths = []
92
+ if 'file_paths' in _state:
93
+ file_paths = _state['file_paths']
94
+ else:
95
+ file_paths = []
96
+ for file in upload_button:
97
+ file_name = os.path.basename(file.name)
98
+ # convert xxx.json to xxx_uuid_str.json
99
+ file_name = file_name.replace('.', f'_{_uuid_str}.')
100
+ file_path = os.path.join(get_ci_dir(), file_name)
101
+ if not os.path.exists(file_path):
102
+ # make sure file path's directory exists
103
+ os.makedirs(os.path.dirname(file_path), exist_ok=True)
104
+ shutil.copy(file.name, file_path)
105
+ file_paths.append(file_path)
106
+ new_file_paths.append(file_path)
107
+ if file_name.endswith(('.jpeg', '.png', '.jpg')):
108
+ chatbot += [((file_path, ), None)]
109
+
110
+ else:
111
+ chatbot.append((None, f'上传文件{file_name},成功'))
112
+ yield {
113
+ user_chatbot: gr.Chatbot.update(visible=True, value=chatbot),
114
+ preview_chat_input: gr.Textbox.update(value='')
115
+ }
116
+
117
+ _state['file_paths'] = file_paths
118
+ _state['new_file_paths'] = new_file_paths
119
+
120
+ upload_button.upload(
121
+ upload_file,
122
+ inputs=[user_chatbot, upload_button, state],
123
+ outputs=[user_chatbot, preview_chat_input])
124
+
125
+ def send_message(chatbot, input, _state):
126
+ # Append the sent message to the chat history
127
+ user_agent = _state['user_agent']
128
+ if 'new_file_paths' in _state:
129
+ new_file_paths = _state['new_file_paths']
130
+ else:
131
+ new_file_paths = []
132
+ _state['new_file_paths'] = []
133
+ chatbot.append((input, ''))
134
+ yield {
135
+ user_chatbot: chatbot,
136
+ preview_chat_input: gr.Textbox.update(value=''),
137
+ }
138
+
139
+ response = ''
140
+ try:
141
+ for frame in user_agent.stream_run(
142
+ input,
143
+ print_info=True,
144
+ remote=False,
145
+ append_files=new_file_paths):
146
+ # is_final = frame.get("frame_is_final")
147
+ llm_result = frame.get('llm_text', '')
148
+ exec_result = frame.get('exec_result', '')
149
+ # llm_result = llm_result.split("<|user|>")[0].strip()
150
+ if len(exec_result) != 0:
151
+ # action_exec_result
152
+ if isinstance(exec_result, dict):
153
+ exec_result = str(exec_result['result'])
154
+ frame_text = f'<result>{exec_result}</result>'
155
+ else:
156
+ # llm result
157
+ frame_text = llm_result
158
+
159
+ # important! do not change this
160
+ response += frame_text
161
+ chatbot[-1] = (input, response)
162
+ yield {
163
+ user_chatbot: chatbot,
164
+ }
165
+ except Exception as e:
166
+ if 'dashscope.common.error.AuthenticationError' in str(e):
167
+ msg = 'DASHSCOPE_API_KEY should be set via environment variable. You can acquire this in ' \
168
+ 'https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key'
169
+ elif 'rate limit' in str(e):
170
+ msg = 'Too many people are calling, please try again later.'
171
+ else:
172
+ msg = str(e)
173
+ chatbot[-1] = (input, msg)
174
+ yield {user_chatbot: chatbot}
175
+
176
+ preview_send_button.click(
177
+ send_message,
178
+ inputs=[user_chatbot, preview_chat_input, state],
179
+ outputs=[user_chatbot, preview_chat_input])
180
+
181
+ demo.load(init_user, inputs=[state], outputs=[state])
182
+
183
+ demo.queue()
184
+ demo.launch()
assets/app.css ADDED
@@ -0,0 +1,147 @@
1
+ /* code highlight: https://python-markdown.github.io/extensions/code_hilite/ */
2
+ .codehilite .hll { background-color: #ffffcc }
3
+ .codehilite { background: #f8f8f8; }
4
+ .codehilite .c { color: #408080; font-style: italic } /* Comment */
5
+ .codehilite .err { border: 1px solid #FF0000 } /* Error */
6
+ .codehilite .k { color: #008000; font-weight: bold } /* Keyword */
7
+ .codehilite .o { color: #666666 } /* Operator */
8
+ .codehilite .ch { color: #408080; font-style: italic } /* Comment.Hashbang */
9
+ .codehilite .cm { color: #408080; font-style: italic } /* Comment.Multiline */
10
+ .codehilite .cp { color: #BC7A00 } /* Comment.Preproc */
11
+ .codehilite .cpf { color: #408080; font-style: italic } /* Comment.PreprocFile */
12
+ .codehilite .c1 { color: #408080; font-style: italic } /* Comment.Single */
13
+ .codehilite .cs { color: #408080; font-style: italic } /* Comment.Special */
14
+ .codehilite .gd { color: #A00000 } /* Generic.Deleted */
15
+ .codehilite .ge { font-style: italic } /* Generic.Emph */
16
+ .codehilite .gr { color: #FF0000 } /* Generic.Error */
17
+ .codehilite .gh { color: #000080; font-weight: bold } /* Generic.Heading */
18
+ .codehilite .gi { color: #00A000 } /* Generic.Inserted */
19
+ .codehilite .go { color: #888888 } /* Generic.Output */
20
+ .codehilite .gp { color: #000080; font-weight: bold } /* Generic.Prompt */
21
+ .codehilite .gs { font-weight: bold } /* Generic.Strong */
22
+ .codehilite .gu { color: #800080; font-weight: bold } /* Generic.Subheading */
23
+ .codehilite .gt { color: #0044DD } /* Generic.Traceback */
24
+ .codehilite .kc { color: #008000; font-weight: bold } /* Keyword.Constant */
25
+ .codehilite .kd { color: #008000; font-weight: bold } /* Keyword.Declaration */
26
+ .codehilite .kn { color: #008000; font-weight: bold } /* Keyword.Namespace */
27
+ .codehilite .kp { color: #008000 } /* Keyword.Pseudo */
28
+ .codehilite .kr { color: #008000; font-weight: bold } /* Keyword.Reserved */
29
+ .codehilite .kt { color: #B00040 } /* Keyword.Type */
30
+ .codehilite .m { color: #666666 } /* Literal.Number */
31
+ .codehilite .s { color: #BA2121 } /* Literal.String */
32
+ .codehilite .na { color: #7D9029 } /* Name.Attribute */
33
+ .codehilite .nb { color: #008000 } /* Name.Builtin */
34
+ .codehilite .nc { color: #0000FF; font-weight: bold } /* Name.Class */
35
+ .codehilite .no { color: #880000 } /* Name.Constant */
36
+ .codehilite .nd { color: #AA22FF } /* Name.Decorator */
37
+ .codehilite .ni { color: #999999; font-weight: bold } /* Name.Entity */
38
+ .codehilite .ne { color: #D2413A; font-weight: bold } /* Name.Exception */
39
+ .codehilite .nf { color: #0000FF } /* Name.Function */
40
+ .codehilite .nl { color: #A0A000 } /* Name.Label */
41
+ .codehilite .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */
42
+ .codehilite .nt { color: #008000; font-weight: bold } /* Name.Tag */
43
+ .codehilite .nv { color: #19177C } /* Name.Variable */
44
+ .codehilite .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */
45
+ .codehilite .w { color: #bbbbbb } /* Text.Whitespace */
46
+ .codehilite .mb { color: #666666 } /* Literal.Number.Bin */
47
+ .codehilite .mf { color: #666666 } /* Literal.Number.Float */
48
+ .codehilite .mh { color: #666666 } /* Literal.Number.Hex */
49
+ .codehilite .mi { color: #666666 } /* Literal.Number.Integer */
50
+ .codehilite .mo { color: #666666 } /* Literal.Number.Oct */
51
+ .codehilite .sa { color: #BA2121 } /* Literal.String.Affix */
52
+ .codehilite .sb { color: #BA2121 } /* Literal.String.Backtick */
53
+ .codehilite .sc { color: #BA2121 } /* Literal.String.Char */
54
+ .codehilite .dl { color: #BA2121 } /* Literal.String.Delimiter */
55
+ .codehilite .sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */
56
+ .codehilite .s2 { color: #BA2121 } /* Literal.String.Double */
57
+ .codehilite .se { color: #BB6622; font-weight: bold } /* Literal.String.Escape */
58
+ .codehilite .sh { color: #BA2121 } /* Literal.String.Heredoc */
59
+ .codehilite .si { color: #BB6688; font-weight: bold } /* Literal.String.Interpol */
60
+ .codehilite .sx { color: #008000 } /* Literal.String.Other */
61
+ .codehilite .sr { color: #BB6688 } /* Literal.String.Regex */
62
+ .codehilite .s1 { color: #BA2121 } /* Literal.String.Single */
63
+ .codehilite .ss { color: #19177C } /* Literal.String.Symbol */
64
+ .codehilite .bp { color: #008000 } /* Name.Builtin.Pseudo */
65
+ .codehilite .fm { color: #0000FF } /* Name.Function.Magic */
66
+ .codehilite .vc { color: #19177C } /* Name.Variable.Class */
67
+ .codehilite .vg { color: #19177C } /* Name.Variable.Global */
68
+ .codehilite .vi { color: #19177C } /* Name.Variable.Instance */
69
+ .codehilite .vm { color: #19177C } /* Name.Variable.Magic */
70
+ .codehilite .il { color: #666666 } /* Literal.Number.Integer.Long */
71
+
72
+ .preview_header {
73
+ font-size: 18px;
74
+ font-weight: 500;
75
+ text-align: center;
76
+ margin-bottom: -12px;
77
+ }
78
+
79
+ .bot_cover {
80
+ display: flex;
81
+ flex-direction: column;
82
+ justify-content: center;
83
+ align-items: center;
84
+ min-height: 650px;
85
+ border: 1px solid rgb(229, 231, 235);
86
+ border-radius: 8px;
87
+ padding: 20px 40px;
88
+ }
89
+
90
+ .bot_avatar {
91
+ width: 100px;
92
+ height: 100px;
93
+ border-radius: 50%;
94
+ overflow: hidden;
95
+ }
96
+
97
+ .bot_avatar img {
98
+ width: 100px;
99
+ height: 100px;
100
+ }
101
+
102
+ .bot_name {
103
+ font-size: 36px;
104
+ margin-top: 10px;
105
+ }
106
+
107
+ .bot_desp {
108
+ color: #ddd;
109
+ }
110
+
111
+ .publish_link_container > a {
112
+ display: block;
113
+ border-radius: var(--button-large-radius);
114
+ padding: var(--button-large-padding);
115
+ font-weight: var(--button-large-text-weight);
116
+ font-size: var(--button-large-text-size);
117
+ border: var(--button-border-width) solid var(--button-secondary-border-color);
118
+ background: var(--button-secondary-background-fill);
119
+ color: var(--button-secondary-text-color) !important;
120
+ cursor: pointer;
121
+ text-decoration: none !important;
122
+ text-align: center;
123
+ }
124
+
125
+ .publish_link_container > .disabled {
126
+ cursor: not-allowed;
127
+ opacity: .5;
128
+ filter: grayscale(30%);
129
+ }
130
+
131
+ .markdown-body .message {
132
+ white-space: pre-wrap;
133
+ }
134
+
135
+ .markdown-body details {
136
+ white-space: nowrap;
137
+ }
138
+ .markdown-body .bot details:not(:last-child) {
139
+ margin-bottom: 1px;
140
+ }
141
+ .markdown-body summary {
142
+ background-color: #4b5563;
143
+ color: #eee;
144
+ padding: 0 4px;
145
+ border-radius: 4px;
146
+ font-size: 0.9em;
147
+ }
assets/appBot.css ADDED
@@ -0,0 +1,129 @@
1
+ /* code highlight: https://python-markdown.github.io/extensions/code_hilite/ */
2
+ .codehilite .hll { background-color: #ffffcc }
3
+ .codehilite { background: #f8f8f8; }
4
+ .codehilite .c { color: #408080; font-style: italic } /* Comment */
5
+ .codehilite .err { border: 1px solid #FF0000 } /* Error */
6
+ .codehilite .k { color: #008000; font-weight: bold } /* Keyword */
7
+ .codehilite .o { color: #666666 } /* Operator */
8
+ .codehilite .ch { color: #408080; font-style: italic } /* Comment.Hashbang */
9
+ .codehilite .cm { color: #408080; font-style: italic } /* Comment.Multiline */
10
+ .codehilite .cp { color: #BC7A00 } /* Comment.Preproc */
11
+ .codehilite .cpf { color: #408080; font-style: italic } /* Comment.PreprocFile */
12
+ .codehilite .c1 { color: #408080; font-style: italic } /* Comment.Single */
13
+ .codehilite .cs { color: #408080; font-style: italic } /* Comment.Special */
14
+ .codehilite .gd { color: #A00000 } /* Generic.Deleted */
15
+ .codehilite .ge { font-style: italic } /* Generic.Emph */
16
+ .codehilite .gr { color: #FF0000 } /* Generic.Error */
17
+ .codehilite .gh { color: #000080; font-weight: bold } /* Generic.Heading */
18
+ .codehilite .gi { color: #00A000 } /* Generic.Inserted */
19
+ .codehilite .go { color: #888888 } /* Generic.Output */
20
+ .codehilite .gp { color: #000080; font-weight: bold } /* Generic.Prompt */
21
+ .codehilite .gs { font-weight: bold } /* Generic.Strong */
22
+ .codehilite .gu { color: #800080; font-weight: bold } /* Generic.Subheading */
23
+ .codehilite .gt { color: #0044DD } /* Generic.Traceback */
24
+ .codehilite .kc { color: #008000; font-weight: bold } /* Keyword.Constant */
25
+ .codehilite .kd { color: #008000; font-weight: bold } /* Keyword.Declaration */
26
+ .codehilite .kn { color: #008000; font-weight: bold } /* Keyword.Namespace */
27
+ .codehilite .kp { color: #008000 } /* Keyword.Pseudo */
28
+ .codehilite .kr { color: #008000; font-weight: bold } /* Keyword.Reserved */
29
+ .codehilite .kt { color: #B00040 } /* Keyword.Type */
30
+ .codehilite .m { color: #666666 } /* Literal.Number */
31
+ .codehilite .s { color: #BA2121 } /* Literal.String */
32
+ .codehilite .na { color: #7D9029 } /* Name.Attribute */
33
+ .codehilite .nb { color: #008000 } /* Name.Builtin */
34
+ .codehilite .nc { color: #0000FF; font-weight: bold } /* Name.Class */
35
+ .codehilite .no { color: #880000 } /* Name.Constant */
36
+ .codehilite .nd { color: #AA22FF } /* Name.Decorator */
37
+ .codehilite .ni { color: #999999; font-weight: bold } /* Name.Entity */
38
+ .codehilite .ne { color: #D2413A; font-weight: bold } /* Name.Exception */
39
+ .codehilite .nf { color: #0000FF } /* Name.Function */
40
+ .codehilite .nl { color: #A0A000 } /* Name.Label */
41
+ .codehilite .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */
42
+ .codehilite .nt { color: #008000; font-weight: bold } /* Name.Tag */
43
+ .codehilite .nv { color: #19177C } /* Name.Variable */
44
+ .codehilite .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */
45
+ .codehilite .w { color: #bbbbbb } /* Text.Whitespace */
46
+ .codehilite .mb { color: #666666 } /* Literal.Number.Bin */
47
+ .codehilite .mf { color: #666666 } /* Literal.Number.Float */
48
+ .codehilite .mh { color: #666666 } /* Literal.Number.Hex */
49
+ .codehilite .mi { color: #666666 } /* Literal.Number.Integer */
50
+ .codehilite .mo { color: #666666 } /* Literal.Number.Oct */
51
+ .codehilite .sa { color: #BA2121 } /* Literal.String.Affix */
52
+ .codehilite .sb { color: #BA2121 } /* Literal.String.Backtick */
53
+ .codehilite .sc { color: #BA2121 } /* Literal.String.Char */
54
+ .codehilite .dl { color: #BA2121 } /* Literal.String.Delimiter */
55
+ .codehilite .sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */
56
+ .codehilite .s2 { color: #BA2121 } /* Literal.String.Double */
57
+ .codehilite .se { color: #BB6622; font-weight: bold } /* Literal.String.Escape */
58
+ .codehilite .sh { color: #BA2121 } /* Literal.String.Heredoc */
59
+ .codehilite .si { color: #BB6688; font-weight: bold } /* Literal.String.Interpol */
60
+ .codehilite .sx { color: #008000 } /* Literal.String.Other */
61
+ .codehilite .sr { color: #BB6688 } /* Literal.String.Regex */
62
+ .codehilite .s1 { color: #BA2121 } /* Literal.String.Single */
63
+ .codehilite .ss { color: #19177C } /* Literal.String.Symbol */
64
+ .codehilite .bp { color: #008000 } /* Name.Builtin.Pseudo */
65
+ .codehilite .fm { color: #0000FF } /* Name.Function.Magic */
66
+ .codehilite .vc { color: #19177C } /* Name.Variable.Class */
67
+ .codehilite .vg { color: #19177C } /* Name.Variable.Global */
68
+ .codehilite .vi { color: #19177C } /* Name.Variable.Instance */
69
+ .codehilite .vm { color: #19177C } /* Name.Variable.Magic */
70
+ .codehilite .il { color: #666666 } /* Literal.Number.Integer.Long */
71
+
72
+ .preview_header {
73
+ font-size: 24px;
74
+ font-weight: 500;
75
+ text-align: center;
76
+ }
77
+
78
+ .bot_cover {
79
+ display: flex;
80
+ flex-direction: column;
81
+ justify-content: center;
82
+ align-items: center;
83
+ min-height: 300px;
84
+ border: 1px solid rgb(229, 231, 235);
85
+ padding: 20px 20px;
86
+ }
87
+
88
+ .bot_avatar {
89
+ width: 100px;
90
+ height: 100px;
91
+ border-radius: 50%;
92
+ overflow: hidden;
93
+ }
94
+
95
+ .bot_avatar img {
96
+ width: 100px;
97
+ height: 100px;
98
+ }
99
+
100
+ .bot_name {
101
+ font-size: 36px;
102
+ margin-top: 10px;
103
+ }
104
+
105
+ .bot_desp {
106
+ color: #ddd;
107
+ }
108
+
109
+ .container {
110
+ flex-direction: row-reverse;
111
+ }
112
+
113
+ .markdown-body .message {
114
+ white-space: pre-wrap;
115
+ }
116
+
117
+ .markdown-body details {
118
+ white-space: nowrap;
119
+ }
120
+ .markdown-body .bot details:not(:last-child) {
121
+ margin-bottom: 1px;
122
+ }
123
+ .markdown-body summary {
124
+ background-color: #4b5563;
125
+ color: #eee;
126
+ padding: 0 4px;
127
+ border-radius: 4px;
128
+ font-size: 0.9em;
129
+ }
assets/bot.jpg ADDED
assets/user.jpg ADDED
builder_core.py ADDED
@@ -0,0 +1,268 @@
1
+ # flake8: noqa E501
2
+ import re
3
+ from typing import Dict
4
+
5
+ import json
6
+ from builder_prompt import BuilderPromptGenerator
7
+ from builder_prompt_zh import ZhBuilderPromptGenerator
8
+ from config_utils import parse_configuration
9
+ from help_tools import LogoGeneratorTool, config_conversion
10
+ from modelscope_agent import prompt_generator_register
11
+ from modelscope_agent.agent import AgentExecutor
12
+ from modelscope_agent.agent_types import AgentType
13
+ from modelscope_agent.llm import LLMFactory
14
+ from modelscope_agent.prompt import MessagesGenerator
15
+ from modelscope_agent.utils.logger import agent_logger as logger
16
+
17
+ prompts = {
18
+ 'BuilderPromptGenerator': BuilderPromptGenerator,
19
+ 'ZhBuilderPromptGenerator': ZhBuilderPromptGenerator,
20
+ }
21
+ prompt_generator_register(prompts)
22
+
23
+ SYSTEM = 'You are a helpful assistant.'
24
+
25
+ LOGO_TOOL_NAME = 'logo_designer'
26
+
27
+ ANSWER = 'Answer'
28
+ CONFIG = 'Config'
29
+ ASSISTANT_PROMPT = """{}: <answer>\n{}: <config>\nRichConfig: <rich_config>""".format(
30
+ ANSWER, CONFIG)
31
+
32
+ UPDATING_CONFIG_STEP = '🚀Updating Config...'
33
+ CONFIG_UPDATED_STEP = '✅Config Updated!'
34
+ UPDATING_LOGO_STEP = '🚀Updating Logo...'
35
+ LOGO_UPDATED_STEP = '✅Logo Updated!'
36
+
37
+
38
+ def init_builder_chatbot_agent(uuid_str):
39
+ # build model
40
+ builder_cfg, model_cfg, _, _, _, _ = parse_configuration(uuid_str)
41
+
42
+ # additional tool
43
+ additional_tool_list = {LOGO_TOOL_NAME: LogoGeneratorTool()}
44
+ tool_cfg = {LOGO_TOOL_NAME: {'is_remote_tool': True}}
45
+
46
+ # build llm
47
+ logger.info(
48
+ uuid=uuid_str, message=f'using builder model {builder_cfg.model}')
49
+ llm = LLMFactory.build_llm(builder_cfg.model, model_cfg)
50
+ llm.set_agent_type(AgentType.Messages)
51
+
52
+ # build prompt
53
+ # prompt generator
54
+ prompt_generator = 'BuilderPromptGenerator'
55
+ language = builder_cfg.get('language', 'en')
56
+ if language == 'zh':
57
+ prompt_generator = 'ZhBuilderPromptGenerator'
58
+
59
+ # build agent
60
+ agent = BuilderChatbotAgent(
61
+ llm,
62
+ tool_cfg,
63
+ agent_type=AgentType.Messages,
64
+ additional_tool_list=additional_tool_list,
65
+ prompt_generator=prompt_generator,
66
+ uuid=uuid_str)
67
+ agent.set_available_tools([LOGO_TOOL_NAME])
68
+ return agent
69
+
70
+
71
+ class BuilderChatbotAgent(AgentExecutor):
72
+
73
+ def __init__(self, llm, tool_cfg, agent_type, additional_tool_list,
74
+ **kwargs):
75
+
76
+ super().__init__(
77
+ llm,
78
+ tool_cfg,
79
+ agent_type=agent_type,
80
+ additional_tool_list=additional_tool_list,
81
+ tool_retrieval=False,
82
+ **kwargs)
83
+
84
+ # used to reconstruct assistant message when builder config is updated
85
+ self._last_assistant_structured_response = {}
86
+
87
+ def stream_run(self,
88
+ task: str,
89
+ remote: bool = True,
90
+ print_info: bool = False,
91
+ append_files: list = [],
92
+ uuid_str: str = '') -> Dict:
93
+
94
+ # retrieve tools
95
+ tool_list = self.retrieve_tools(task)
96
+ self.prompt_generator.init_prompt(task, tool_list, [])
97
+ function_list = []
98
+
99
+ llm_result, exec_result = '', ''
100
+
101
+ idx = 0
102
+
103
+ while True:
104
+ idx += 1
105
+ llm_artifacts = self.prompt_generator.generate(
106
+ llm_result, exec_result)
107
+ if print_info:
108
+ logger.info(
109
+ uuid=uuid_str,
110
+ message=f'LLM inputs in round {idx}',
111
+ content={'llm_artifacts': llm_artifacts})
112
+
113
+ llm_result = ''
114
+ try:
115
+ parser_obj = AnswerParser()
116
+ for s in self.llm.stream_generate(llm_artifacts=llm_artifacts):
117
+ llm_result += s
118
+ answer, finish = parser_obj.parse_answer(llm_result)
119
+ if answer == '':
120
+ continue
121
+ result = {'llm_text': answer}
122
+ if finish:
123
+ result.update({'step': UPDATING_CONFIG_STEP})
124
+ yield result
125
+
126
+ if print_info:
127
+ logger.info(
128
+ uuid=uuid_str,
129
+ message=f'LLM output in round {idx}',
130
+ content={'llm_result': llm_result})
131
+ except Exception as e:
132
+ yield {'error': 'llm result is not valid'}
133
+
134
+ try:
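+ # The builder LLM replies in the Answer / Config / RichConfig layout defined
+ # by ASSISTANT_PROMPT; pull the Config block and the trailing RichConfig
+ # JSON out of the raw completion.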
135
+ re_pattern_config = re.compile(
136
+ pattern=r'Config: ([\s\S]+)\nRichConfig')
137
+ res = re_pattern_config.search(llm_result)
138
+ if res is None:
139
+ return
140
+ config = res.group(1).strip()
141
+ self._last_assistant_structured_response['config_str'] = config
142
+
143
+ rich_config = llm_result[llm_result.rfind('RichConfig:')
144
+ + len('RichConfig:'):].strip()
145
+ try:
146
+ answer = json.loads(rich_config)
147
+ except Exception:
148
+ logger.error(uuid=uuid_str, error='parse RichConfig error')
149
+ return
150
+ self._last_assistant_structured_response[
151
+ 'rich_config_dict'] = answer
152
+ builder_cfg = config_conversion(answer, uuid_str=uuid_str)
153
+ yield {'exec_result': {'result': builder_cfg}}
154
+ yield {'step': CONFIG_UPDATED_STEP}
155
+ except ValueError as e:
156
+ logger.error(uuid=uuid_str, error=str(e))
157
+ yield {'error': 'content=[{}]'.format(llm_result)}
158
+ return
159
+
160
+ # record the llm_result result
161
+ _ = self.prompt_generator.generate(
162
+ {
163
+ 'role': 'assistant',
164
+ 'content': llm_result
165
+ }, '')
166
+
167
+ messages = self.prompt_generator.history
168
+ if 'logo_prompt' in answer and len(messages) > 4 and (
169
+ answer['logo_prompt'] not in messages[-3]['content']):
170
+ # draw logo
171
+ yield {'step': UPDATING_LOGO_STEP}
172
+ params = {
173
+ 'user_requirement': answer['logo_prompt'],
174
+ 'uuid_str': uuid_str
175
+ }
176
+
177
+ tool = self.tool_list[LOGO_TOOL_NAME]
178
+ try:
179
+ exec_result = tool(**params, remote=remote)
180
+ yield {'exec_result': exec_result}
181
+ yield {'step': LOGO_UPDATED_STEP}
182
+
183
+ return
184
+ except Exception as e:
185
+ exec_result = f'Action call error: {LOGO_TOOL_NAME}: {params}. \n Error message: {e}'
186
+ yield {'error': exec_result}
187
+ self.prompt_generator.reset()
188
+ return
189
+ else:
190
+ return
191
+
192
+ def update_config_to_history(self, config: Dict):
193
+ """ update builder config to message when user modify configuration
194
+
195
+ Args:
196
+ config (Dict): config info read from the builder config file
197
+ """
198
+ if len(
199
+ self.prompt_generator.history
200
+ ) > 0 and self.prompt_generator.history[-1]['role'] == 'assistant':
201
+ answer = self._last_assistant_structured_response.get('answer_str', '')  # fall back to '' if no answer was captured
202
+ simple_config = self._last_assistant_structured_response[
203
+ 'config_str']
204
+
205
+ rich_config_dict = {
206
+ k: config[k]
207
+ for k in ['name', 'description', 'prompt_recommend']
208
+ }
209
+ rich_config_dict[
210
+ 'logo_prompt'] = self._last_assistant_structured_response[
211
+ 'rich_config_dict']['logo_prompt']
212
+ rich_config_dict['instructions'] = config['instruction'].split(';')
213
+
214
+ rich_config = json.dumps(rich_config_dict, ensure_ascii=False)
215
+ new_content = ASSISTANT_PROMPT.replace('<answer>', answer).replace(
216
+ '<config>', simple_config).replace('<rich_config>',
217
+ rich_config)
218
+ self.prompt_generator.history[-1]['content'] = new_content
219
+
220
+
221
+ def beauty_output(response: str, step_result: str):
222
+ flag_list = [
223
+ CONFIG_UPDATED_STEP, UPDATING_CONFIG_STEP, LOGO_UPDATED_STEP,
224
+ UPDATING_LOGO_STEP
225
+ ]
226
+
227
+ if step_result in flag_list:
228
+ end_str = ''
229
+ for item in flag_list:
230
+ if response.endswith(item):
231
+ end_str = item
232
+ if end_str == '':
233
+ response = f'{response}\n{step_result}'
234
+ elif end_str in [CONFIG_UPDATED_STEP, LOGO_UPDATED_STEP]:
235
+ response = f'{response}\n{step_result}'
236
+ else:
237
+ response = response[:-len('\n' + end_str)]
238
+ response = f'{response}\n{step_result}'
239
+
240
+ return response
241
+
242
+
243
+ class AnswerParser(object):
244
+
245
+ def __init__(self):
246
+ self._history = ''
247
+
248
+ def parse_answer(self, llm_result: str):
249
+ finish = False
250
+ answer_prompt = ANSWER + ': '
251
+
252
+ if len(llm_result) >= len(answer_prompt):
253
+ start_pos = llm_result.find(answer_prompt)
254
+ end_pos = llm_result.find(f'\n{CONFIG}')
255
+ if start_pos >= 0:
256
+ if end_pos > start_pos:
257
+ result = llm_result[start_pos + len(answer_prompt):end_pos]
258
+ finish = True
259
+ else:
260
+ result = llm_result[start_pos + len(answer_prompt):]
261
+ else:
262
+ result = llm_result
263
+ else:
264
+ result = ''
265
+
266
+ new_result = result[len(self._history):]
267
+ self._history = result
268
+ return new_result, finish
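A minimal usage sketch of `AnswerParser` above (illustrative only, not part of this commit; the streamed chunks are invented):

```python
# Illustrative sketch: AnswerParser returns only the newly generated portion of
# the "Answer:" section as chunks stream in, and reports finish=True once the
# "\nConfig" section begins.
parser = AnswerParser()
chunks = ['Answer: Hello', ' builder!', '\nConfig: {"name": "demo"}\nRichConfig: {"name": "demo"}']
streamed = ''
for chunk in chunks:
    streamed += chunk
    delta, finished = parser.parse_answer(streamed)
    if delta:
        print(delta, end='')        # prints "Hello builder!" incrementally
    if finished:
        print('\n-> Answer complete, Config section reached')
```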
builder_prompt.py ADDED
@@ -0,0 +1,61 @@
1
+ from modelscope_agent.prompt import MessagesGenerator
2
+
3
+ SYSTEM = 'You are a helpful assistant.'
4
+
5
+ PROMPT_CUSTOM = """You are now playing the role of an AI assistant (QwenBuilder) for creating an AI character (AI-Agent).
6
+ You need to have a conversation with the user to clarify their requirements for the AI-Agent. Based on existing \
7
+ information and your associative ability, try to fill in the complete configuration file:
8
+
9
+ The configuration file is in JSON format:
10
+ {"name": "... # Name of the AI-Agent", "description": "... # Brief description of the requirements for the AI-Agent", \
11
+ "instructions": "... # Detailed description of specific functional requirements for the AI-Agent, try to be as \
12
+ detailed as possible, type is a string array, starting with []", "prompt_recommend": "... # Recommended commands for \
13
+ the user to say to the AI-Agent, used to guide the user in using the AI-Agent, type is a string array, please add \
14
+ about 4 sentences as much as possible, starting with ["What can you do?"] ", "logo_prompt": "... # Command to draw \
15
+ the logo of the AI-Agent, can be empty if no logo is required or if the logo does not need to be updated, type is \
16
+ string"}
17
+
18
+ In the following conversation, please use the following format strictly when answering, first give the response, then \
19
+ generate the configuration file, do not reply with any other content:
20
+ Answer: ... # What you want to say to the user, ask the user about their requirements for the AI-Agent, do not repeat \
21
+ confirmed requirements from the user, but instead explore new angles to ask the user, try to be detailed and rich, do \
22
+ not leave it blank
23
+ Config: ... # The generated configuration file, strictly follow the above JSON format
24
+ RichConfig: ... # The format and core content are the same as Config, but ensure that name and description are not \
25
+ empty; expand instructions based on Config, making the instructions more detailed, if the user provided detailed \
26
+ instructions, keep them completely; supplement prompt_recommend, ensuring prompt_recommend is recommended commands for \
27
+ the user to say to the AI-Agent. Please describe prompt_recommend, description, and instructions from the perspective \
28
+ of the user.
29
+
30
+ An excellent RichConfig example is as follows:
31
+ {"name": "Xiaohongshu Copywriting Generation Assistant", "description": "A copywriting generation assistant \
32
+ specifically designed for Xiaohongshu users.", "instructions": "1. Understand and respond to user commands; 2. \
33
+ Generate high-quality Xiaohongshu-style copywriting according to user needs; 3. Use emojis to enhance text richness", \
34
+ "prompt_recommend": ["Can you help me generate some copywriting about travel?", "What kind of copywriting can you \
35
+ write?", "Can you recommend a Xiaohongshu copywriting template?" ], "logo_prompt": "A writing assistant logo \
36
+ featuring a feather fountain pen"}
37
+
38
+
39
+ Say "OK." if you understand, do not say anything else."""
40
+
41
+ STARTER_MESSAGE = [{
42
+ 'role': 'system',
43
+ 'content': SYSTEM
44
+ }, {
45
+ 'role': 'user',
46
+ 'content': PROMPT_CUSTOM
47
+ }, {
48
+ 'role': 'assistant',
49
+ 'content': 'OK.'
50
+ }]
51
+
52
+
53
+ class BuilderPromptGenerator(MessagesGenerator):
54
+
55
+ def __init__(self,
56
+ system_template=SYSTEM,
57
+ custom_starter_messages=STARTER_MESSAGE,
58
+ **kwargs):
59
+ super().__init__(
60
+ system_template=system_template,
61
+ custom_starter_messages=custom_starter_messages)
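For reference, a sketch of what the seeded builder conversation looks like once a user turn is appended (illustrative only, not part of this commit; the user requirement is invented):

```python
# Illustrative sketch: STARTER_MESSAGE primes the chat with the system prompt,
# the QwenBuilder instruction and the assistant's "OK." before the user's
# first requirement is appended as a normal chat turn.
messages = list(STARTER_MESSAGE)
messages.append({'role': 'user', 'content': 'I want an agent that plans weekend hikes.'})
for m in messages:
    print(m['role'], '->', m['content'][:40].replace('\n', ' '))
```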
builder_prompt_zh.py ADDED
@@ -0,0 +1,45 @@
1
+ from builder_prompt import BuilderPromptGenerator
2
+
3
+ SYSTEM = 'You are a helpful assistant.'
4
+
5
+ PROMPT_CUSTOM = """你现在要扮演一个制造AI角色(AI-Agent)的AI助手(QwenBuilder)。
6
+ 你需要和用户进行对话,明确用户对AI-Agent的要求。并根据已有信息和你的联想能力,尽可能填充完整的配置文件:
7
+
8
+ 配置文件为json格式:
9
+ {"name": "... # AI-Agent的名字", "description": "... # 对AI-Agent的要求,简单描述", "instructions": "... \
10
+ # 分点描述对AI-Agent的具体功能要求,尽量详细一些,类型是一个字符串数组,起始为[]", "prompt_recommend": \
11
+ "... # 推荐的用户将对AI-Agent说的指令,用于指导用户使用AI-Agent,类型是一个字符串数组,请尽可能补充4句左右,\
12
+ 起始为["你可以做什么?"]", "logo_prompt": "... # 画AI-Agent的logo的指令,不需要画logo或不需要更新logo时可以为空,类型是string"}
13
+
14
+ 在接下来的对话中,请在回答时严格使用如下格式,先作出回复,再生成配置文件,不要回复其他任何内容:
15
+ Answer: ... # 你希望对用户说的话,用于询问用户对AI-Agent的要求,不要重复确认用户已经提出的要求,而应该拓展出新的角度来询问用户,尽量细节和丰富,禁止为空
16
+ Config: ... # 生成的配置文件,严格按照以上json格式
17
+ RichConfig: ... # 格式和核心内容和Config相同,但是保证name和description不为空;instructions需要在Config的基础上扩充字数,\
18
+ 使指令更加详尽,如果用户给出了详细指令,请完全保留;补充prompt_recommend,并保证prompt_recommend是推荐的用户将对AI-Agent\
19
+ 说的指令。请注意从用户的视角来描述prompt_recommend、description和instructions。
20
+
21
+ 一个优秀的RichConfig样例如下:
22
+ {"name": "小红书文案生成助手", "description": "一个专为小红书用户设计的文案生成助手。", "instructions": "1. 理解并回应用户的指令;\
23
+ 2. 根据用户的需求生成高质量的小红书风格文案;3. 使用表情提升文本丰富度", "prompt_recommend": ["你可以帮我生成一段关于旅行的文案吗?", \
24
+ "你会写什么样的文案?", "可以推荐一个小红书文案模版吗?"], "logo_prompt": "一个写作助手logo,包含一只羽毛钢笔"}
25
+
26
+
27
+ 明白了请说“好的。”, 不要说其他的。"""
28
+
29
+ STARTER_MESSAGE = [{
30
+ 'role': 'system',
31
+ 'content': SYSTEM
32
+ }, {
33
+ 'role': 'user',
34
+ 'content': PROMPT_CUSTOM
35
+ }, {
36
+ 'role': 'assistant',
37
+ 'content': 'OK.'
38
+ }]
39
+
40
+
41
+ class ZhBuilderPromptGenerator(BuilderPromptGenerator):
42
+
43
+ def __init__(self, custom_starter_messages=STARTER_MESSAGE, **kwargs):
44
+ super().__init__(
45
+ custom_starter_messages=custom_starter_messages, **kwargs)
config/builder_config.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "name": "",
3
+ "avatar": "custom_bot_avatar.png",
4
+ "description": "",
5
+ "instruction": "",
6
+ "language": "zh",
7
+ "prompt_recommend": [
8
+ "你可以做什么?",
9
+ "你有什么功能?",
10
+ "如何使用你的功能?",
11
+ "能否给我一些示例指令?"
12
+ ],
13
+ "knowledge": [],
14
+ "tools": {
15
+ "image_gen": {
16
+ "name": "Wanx Image Generation",
17
+ "is_active": true,
18
+ "use": true
19
+ },
20
+ "code_interpreter": {
21
+ "name": "Code Interpreter",
22
+ "is_active": true,
23
+ "use": false
24
+ }
25
+ },
26
+ "model": "qwen-max"
27
+ }
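A quick sketch of working with this config (illustrative only, not part of this commit):

```python
# Illustrative sketch: builder_config.json is plain JSON, so a tool can be
# toggled by flipping its per-tool "use" flag.
import json

with open('config/builder_config.json', encoding='utf-8') as f:
    cfg = json.load(f)
cfg['tools']['code_interpreter']['use'] = True   # enable the code interpreter
print([name for name, tool in cfg['tools'].items() if tool['use']])
```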
config/builder_config_ci.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "name": "Python数据分析师",
3
+ "avatar": "image.png",
4
+ "description": "使用python解决任务时,你可以运行代码并得到结果,如果运行结果有错误,你需要尽可能对代码进行改进。你可以处理用户上传到电脑的文件。",
5
+ "instruction": "1. 你会数学解题;\n2. 你会数据分析和可视化;\n3. 用户上传文件时,你必须先了解文件结构再进行下一步操作;如果没有上传文件但要求画图,则编造示例数据画图\n4. 调用工具前你需要说明理由;Think step by step\n5. 代码出错时你需要反思并改进",
6
+ "prompt_recommend": [
7
+ "制作示例饼图来报告某网站流量来源。",
8
+ "鸡兔同笼 32头 88腿 多少兔",
9
+ "帮我把这个链接“https://modelscope.cn/my/overview”网址,转成二维码,并展示图片",
10
+ "一支钢笔5元,一支铅笔3元,一个文具盒10元,一套文具包括2支钢笔,3支铅笔,1个文具盒,一共多少钱?"
11
+ ],
12
+ "knowledge": [],
13
+ "tools": {
14
+ "image_gen": {
15
+ "name": "Wanx Image Generation",
16
+ "is_active": true,
17
+ "use": false
18
+ },
19
+ "code_interpreter": {
20
+ "name": "Code Interpreter",
21
+ "is_active": true,
22
+ "use": true
23
+ },
24
+ "amap_weather": {
25
+ "name": "高德天气",
26
+ "is_active": true,
27
+ "use": false
28
+ }
29
+ },
30
+ "model": "qwen-max"
31
+ }
config/builder_config_template.json ADDED
@@ -0,0 +1,26 @@
1
+ {
2
+ "name": "AI-Agent",
3
+ "avatar": "logo.png",
4
+ "description": "我希望AI-Agent能够像多啦A梦一样,拥有各种神奇的技能和能力,可以帮我解决生活中的各种问题。",
5
+ "instruction": "请告诉我你想要什么帮助,我会尽力提供解决方案。;如果你有任何问题,请随时向我提问,我会尽我所能回答你的问题。;我可以帮你查找信息、提供建议、提醒日程等,只需要你告诉我你需要什么。",
6
+ "prompt_recommend": [
7
+ "你好,我是AI-Agent,有什么可以帮助你的吗?",
8
+ "嗨,很高兴见到你,我是AI-Agent,你可以问我任何问题。",
9
+ "你好,我是AI-Agent,需要我帮你做些什么吗?",
10
+ "嗨,我是AI-Agent,有什么我可以帮到你的吗?"
11
+ ],
12
+ "knowledge": [],
13
+ "tools": {
14
+ "image_gen": {
15
+ "name": "Wanx Image Generation",
16
+ "is_active": true,
17
+ "use": true
18
+ },
19
+ "code_interpreter": {
20
+ "name": "Code Interpreter",
21
+ "is_active": true,
22
+ "use": false
23
+ }
24
+ },
25
+ "model": "qwen-max"
26
+ }
config/builder_config_wuxia.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "name": "武侠小说家",
3
+ "avatar": "custom_bot_avatar.png",
4
+ "description": "能够生成武侠小说并配图",
5
+ "instruction": "你的指令是为我提供一个基于金庸武侠小说世界的在线RPG游戏体验。在这个游戏中,玩家将扮演金庸故事中的一个关键角色,游戏情景将基于他的小说。这个游戏的玩法是互动式的,并遵循以下特定格式:\n\n<场景描述>:根据玩家的选择,故事情节将按照金庸小说的线索发展。你将描述角色所处的环境和情况。\n\n<场景图片>:对于每个场景,你将创造一个概括该情况的图像。这些图像的风格将类似于1980年代RPG游戏,大小是16:9宽屏比例。在这个步骤你需要调用画图工具,绘制<场景描述>。\n\n<选择>:在每次互动中,你将为玩家提供三个行动选项,分别标为A、B、C,以及第四个选项“D: 输入玩家的选择”。故事情节将根据玩家选择的行动进展。如果一个选择不是直接来自小说,你将创造性地适应故事,最终引导它回归原始情节。\n\n整个故事将围绕金庸小说中丰富而复杂的世界展开。每次互动必须包括<场景描述>、<场景图片>和<选择>。所有内容将以繁体中文呈现。你的重点将仅仅放在提供场景描述,场景图片和选择上,不包含其他游戏指导。场景尽量不要重复,要丰富一些。",
6
+ "prompt_recommend": [
7
+ "扮演小龙女",
8
+ "扮演杨过"
9
+ ],
10
+ "knowledge": [],
11
+ "tools": {
12
+ "image_gen": {
13
+ "name": "Wanx Image Generation",
14
+ "is_active": true,
15
+ "use": true
16
+ },
17
+ "code_interpreter": {
18
+ "name": "Code Interpreter",
19
+ "is_active": true,
20
+ "use": false
21
+ }
22
+ },
23
+ "model": "qwen-max"
24
+ }
config/custom_bot_avatar.png ADDED
config/local_user/builder_config.json ADDED
@@ -0,0 +1,35 @@
1
+ {
2
+ "name": "Python数据分析师",
3
+ "avatar": "image.png",
4
+ "description": "使用python解决任务时,你可以运行代码并得到结果,如果运行结果有错误,你需要尽可能对代码进行改进。你可以处理用户上传到电脑的文件。",
5
+ "instruction": "1. 你会数学解题;\n2. 你会数据分析和可视化;\n3. 用户上传文件时,你必须先了解文件结构再进行下一步操作;如果没有上传文件但要求画图,则编造示例数据画图\n4. 调用工具前你需要说明理由;Think step by step\n5. 代码出错时你需要反思并改进",
6
+ "prompt_recommend": [
7
+ "制作示例饼图来报告某网站流量来源。",
8
+ "你会做什么菜?",
9
+ "可以推荐一部好看的电影吗?"
10
+ ],
11
+ "knowledge": [],
12
+ "tools": {
13
+ "image_gen": {
14
+ "name": "Wanx Image Generation",
15
+ "is_active": true,
16
+ "use": false
17
+ },
18
+ "code_interpreter": {
19
+ "name": "Code Interpreter",
20
+ "is_active": true,
21
+ "use": true
22
+ },
23
+ "amap_weather": {
24
+ "name": "高德天气",
25
+ "is_active": true,
26
+ "use": false
27
+ },
28
+ "wordart_texture_generation": {
29
+ "name": "艺术字纹理生成",
30
+ "is_active": true,
31
+ "use": false
32
+ }
33
+ },
34
+ "model": "qwen-max"
35
+ }
config/local_user/custom_bot_avatar.png ADDED
config/local_user/image.png ADDED
config/model_config.json ADDED
@@ -0,0 +1,89 @@
1
+ {
2
+ "qwen-turbo": {
3
+ "type": "dashscope",
4
+ "model": "qwen-turbo",
5
+ "generate_cfg": {
6
+ "use_raw_prompt": true,
7
+ "top_p": 0.8
8
+ }
9
+ },
10
+ "qwen-plus": {
11
+ "type": "dashscope",
12
+ "model": "qwen-plus",
13
+ "generate_cfg": {
14
+ "use_raw_prompt": true,
15
+ "top_p": 0.8
16
+ }
17
+ },
18
+ "qwen-max": {
19
+ "type": "dashscope",
20
+ "model": "qwen-max",
21
+ "length_constraint": {
22
+ "knowledge": 4000,
23
+ "input": 6000
24
+ },
25
+ "generate_cfg": {
26
+ "use_raw_prompt": true,
27
+ "top_p": 0.8
28
+ }
29
+ },
30
+ "qwen-max-longcontext": {
31
+ "type": "dashscope",
32
+ "model": "qwen-max-longcontext",
33
+ "length_constraint": {
34
+ "knowledge": 28000,
35
+ "input": 30000
36
+ },
37
+ "generate_cfg": {
38
+ "use_raw_prompt": true,
39
+ "top_p": 0.8
40
+ }
41
+ },
42
+ "qwen-7b": {
43
+ "type": "modelscope",
44
+ "model_id": "qwen/Qwen-7B-Chat",
45
+ "model_revision": "v1.1.8",
46
+ "generate_cfg": {
47
+ "use_raw_prompt": true,
48
+ "top_p": 0.8,
49
+ "max_length": 2000
50
+ }
51
+ },
52
+ "qwen-7b-api": {
53
+ "type": "dashscope",
54
+ "model": "qwen-7b-chat",
55
+ "generate_cfg": {
56
+ "use_raw_prompt": true,
57
+ "top_p": 0.8,
58
+ "debug": false
59
+ }
60
+ },
61
+ "qwen-14b": {
62
+ "type": "modelscope",
63
+ "model_id": "qwen/Qwen-14B-Chat",
64
+ "model_revision": "v1.0.8",
65
+ "generate_cfg": {
66
+ "use_raw_prompt": true,
67
+ "top_p": 0.8,
68
+ "max_length": 2000
69
+ }
70
+ },
71
+ "qwen-14b-api": {
72
+ "type": "dashscope",
73
+ "model": "qwen-14b-chat",
74
+ "generate_cfg": {
75
+ "use_raw_prompt": true,
76
+ "top_p": 0.8,
77
+ "debug": false
78
+ }
79
+ },
80
+ "qwen-72b-api": {
81
+ "type": "dashscope",
82
+ "model": "qwen-72b-chat",
83
+ "generate_cfg": {
84
+ "use_raw_prompt": true,
85
+ "top_p": 0.8,
86
+ "debug": false
87
+ }
88
+ }
89
+ }
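A short sketch of how an entry in this file is read (illustrative only, not part of this commit):

```python
# Illustrative sketch: each key selects a backend ("dashscope" API model or a
# local "modelscope" checkpoint) plus its generation settings; the builder
# config's "model" field picks one of these entries at runtime.
import json

with open('config/model_config.json', encoding='utf-8') as f:
    model_cfg = json.load(f)
entry = model_cfg['qwen-max']
print(entry['type'], entry['generate_cfg']['top_p'], entry.get('length_constraint', {}))
```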
config/tool_config.json ADDED
@@ -0,0 +1,47 @@
1
+ {
2
+ "image_gen": {
3
+ "name": "Wanx Image Generation",
4
+ "is_active": true,
5
+ "use": true,
6
+ "is_remote_tool": true
7
+ },
8
+ "code_interpreter": {
9
+ "name": "Code Interpreter",
10
+ "is_active": true,
11
+ "use": false,
12
+ "is_remote_tool": false,
13
+ "max_output": 2000
14
+ },
15
+ "web_browser": {
16
+ "name": "Web Browsing",
17
+ "is_active": true,
18
+ "use": false,
19
+ "max_browser_length": 2000
20
+ },
21
+ "amap_weather": {
22
+ "name": "高德天气",
23
+ "is_active": true,
24
+ "use": false
25
+ },
26
+ "wordart_texture_generation": {
27
+ "name": "艺术字纹理生成",
28
+ "is_active": true,
29
+ "use": false
30
+ },
31
+ "web_search": {
32
+ "name": "Web Searching",
33
+ "is_active": true,
34
+ "use": false,
35
+ "searcher": "bing"
36
+ },
37
+ "qwen_vl": {
38
+ "name": "Qwen-VL识图",
39
+ "is_active": true,
40
+ "use": false
41
+ },
42
+ "style_repaint": {
43
+ "name": "人物风格重绘",
44
+ "is_active": true,
45
+ "use": false
46
+ }
47
+ }
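A short sketch of how this registry is typically consumed (illustrative only, not part of this commit):

```python
# Illustrative sketch: tool_config.json is the global tool registry;
# "is_active" marks a tool as available, while "use" is the default on/off
# switch that per-agent builder configs can override.
import json

with open('config/tool_config.json', encoding='utf-8') as f:
    tools = json.load(f)
print([name for name, tool in tools.items() if tool.get('use')])   # ['image_gen'] with these defaults
```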
config_utils.py ADDED
@@ -0,0 +1,174 @@
1
+ import os
2
+ import shutil
3
+ import traceback
4
+
5
+ import json
6
+ from modelscope_agent.tools.openapi_plugin import openapi_schema_convert
7
+ from modelscope_agent.utils.logger import agent_logger as logger
8
+
9
+ from modelscope.utils.config import Config
10
+
11
+ DEFAULT_AGENT_DIR = '/tmp/agentfabric'
12
+ DEFAULT_BUILDER_CONFIG_DIR = os.path.join(DEFAULT_AGENT_DIR, 'config')
13
+ DEFAULT_BUILDER_CONFIG_FILE = os.path.join(DEFAULT_BUILDER_CONFIG_DIR,
14
+ 'builder_config.json')
15
+ DEFAULT_OPENAPI_PLUGIN_CONFIG_FILE = os.path.join(
16
+ DEFAULT_BUILDER_CONFIG_DIR, 'openapi_plugin_config.json')
17
+ DEFAULT_MODEL_CONFIG_FILE = './config/model_config.json'
18
+ DEFAULT_TOOL_CONFIG_FILE = './config/tool_config.json'
19
+ DEFAULT_CODE_INTERPRETER_DIR = os.getenv('CODE_INTERPRETER_WORK_DIR',
20
+ '/tmp/ci_workspace')
21
+
22
+
23
+ def get_user_dir(uuid_str=''):
24
+ return os.path.join(DEFAULT_BUILDER_CONFIG_DIR, uuid_str)
25
+
26
+
27
+ def get_ci_dir():
28
+ return DEFAULT_CODE_INTERPRETER_DIR
29
+
30
+
31
+ def get_user_cfg_file(uuid_str=''):
32
+ builder_cfg_file = os.getenv('BUILDER_CONFIG_FILE',
33
+ DEFAULT_BUILDER_CONFIG_FILE)
34
+ # convert from ./config/builder_config.json to ./config/user/builder_config.json
35
+ builder_cfg_file = builder_cfg_file.replace('config/', 'config/user/')
36
+
37
+ # convert from ./config/user/builder_config.json to ./config/uuid/builder_config.json
38
+ if uuid_str != '':
39
+ builder_cfg_file = builder_cfg_file.replace('user', uuid_str)
40
+ return builder_cfg_file
41
+
42
+
43
+ def get_user_openapi_plugin_cfg_file(uuid_str=''):
44
+ openapi_plugin_cfg_file = os.getenv('OPENAPI_PLUGIN_CONFIG_FILE',
45
+ DEFAULT_OPENAPI_PLUGIN_CONFIG_FILE)
46
+ openapi_plugin_cfg_file = openapi_plugin_cfg_file.replace(
47
+ 'config/', 'config/user/')
48
+ if uuid_str != '':
49
+ openapi_plugin_cfg_file = openapi_plugin_cfg_file.replace(
50
+ 'user', uuid_str)
51
+ return openapi_plugin_cfg_file
52
+
53
+
54
+ def save_builder_configuration(builder_cfg, uuid_str=''):
55
+ builder_cfg_file = get_user_cfg_file(uuid_str)
56
+ if uuid_str != '' and not os.path.exists(
57
+ os.path.dirname(builder_cfg_file)):
58
+ os.makedirs(os.path.dirname(builder_cfg_file))
59
+ with open(builder_cfg_file, 'w', encoding='utf-8') as f:
60
+ f.write(json.dumps(builder_cfg, indent=2, ensure_ascii=False))
61
+
62
+
63
+ def is_valid_plugin_configuration(openapi_plugin_cfg):
64
+ if 'schema' in openapi_plugin_cfg:
65
+ schema = openapi_plugin_cfg['schema']
66
+ if isinstance(schema, dict):
67
+ return True
68
+ else:
69
+ return False
70
+
71
+
72
+ def save_plugin_configuration(openapi_plugin_cfg, uuid_str):
73
+ openapi_plugin_cfg_file = get_user_openapi_plugin_cfg_file(uuid_str)
74
+ if uuid_str != '' and not os.path.exists(
75
+ os.path.dirname(openapi_plugin_cfg_file)):
76
+ os.makedirs(os.path.dirname(openapi_plugin_cfg_file))
77
+ with open(openapi_plugin_cfg_file, 'w', encoding='utf-8') as f:
78
+ f.write(json.dumps(openapi_plugin_cfg, indent=2, ensure_ascii=False))
79
+
80
+
81
+ def get_avatar_image(bot_avatar, uuid_str=''):
82
+ user_avatar_path = os.path.join(
83
+ os.path.dirname(__file__), 'assets/user.jpg')
84
+ bot_avatar_path = os.path.join(os.path.dirname(__file__), 'assets/bot.jpg')
85
+ if len(bot_avatar) > 0:
86
+ bot_avatar_path = os.path.join(DEFAULT_BUILDER_CONFIG_DIR, uuid_str,
87
+ bot_avatar)
88
+ if uuid_str != '':
89
+ # use default if not exists
90
+ if not os.path.exists(bot_avatar_path):
91
+ # create parents directory
92
+ os.makedirs(os.path.dirname(bot_avatar_path), exist_ok=True)
93
+ # copy the template to the address
94
+ temp_bot_avatar_path = os.path.join(DEFAULT_BUILDER_CONFIG_DIR,
95
+ bot_avatar)
96
+ if not os.path.exists(temp_bot_avatar_path):
97
+ # fall back to default local avatar image
98
+ temp_bot_avatar_path = os.path.join('./config', bot_avatar)
99
+ if not os.path.exists(temp_bot_avatar_path):
100
+ temp_bot_avatar_path = os.path.join(
101
+ './config', 'custom_bot_avatar.png')
102
+
103
+ shutil.copy(temp_bot_avatar_path, bot_avatar_path)
104
+
105
+ return [user_avatar_path, bot_avatar_path]
106
+
107
+
108
+ def save_avatar_image(image_path, uuid_str=''):
109
+ bot_avatar = os.path.basename(image_path)
110
+ bot_avatar_path = os.path.join(DEFAULT_BUILDER_CONFIG_DIR, uuid_str,
111
+ bot_avatar)
112
+ shutil.copy(image_path, bot_avatar_path)
113
+ return bot_avatar, bot_avatar_path
114
+
115
+
116
+ def parse_configuration(uuid_str=''):
117
+ """parse configuration
118
+
119
+ Args:
120
+ uuid_str (str): user id used to locate the per-user config; '' falls back to the default config
121
+ Returns:
122
+ tuple: (builder_cfg, model_cfg, tool_cfg, available_tool_list, plugin_cfg, available_plugin_list)
123
+
124
+ """
125
+ model_cfg_file = os.getenv('MODEL_CONFIG_FILE', DEFAULT_MODEL_CONFIG_FILE)
126
+
127
+ builder_cfg_file = get_user_cfg_file(uuid_str)
128
+ # use default if not exists
129
+ if not os.path.exists(builder_cfg_file):
130
+ # create parents directory
131
+ os.makedirs(os.path.dirname(builder_cfg_file), exist_ok=True)
132
+ # copy the template to the address
133
+ builder_cfg_file_temp = './config/builder_config.json'
134
+
135
+ if builder_cfg_file_temp != builder_cfg_file:
136
+ shutil.copy(builder_cfg_file_temp, builder_cfg_file)
137
+
138
+ tool_cfg_file = os.getenv('TOOL_CONFIG_FILE', DEFAULT_TOOL_CONFIG_FILE)
139
+
140
+ builder_cfg = Config.from_file(builder_cfg_file)
141
+ model_cfg = Config.from_file(model_cfg_file)
142
+ tool_cfg = Config.from_file(tool_cfg_file)
143
+
144
+ tools_info = builder_cfg.tools
145
+ available_tool_list = []
146
+ for key, value in tools_info.items():
147
+ if value['use']:
148
+ available_tool_list.append(key)
149
+ tool_cfg[key]['use'] = value['use']
150
+
151
+ openapi_plugin_file = get_user_openapi_plugin_cfg_file(uuid_str)
152
+ plugin_cfg = {}
153
+ available_plugin_list = []
154
+ if os.path.exists(openapi_plugin_file):
155
+ openapi_plugin_cfg = Config.from_file(openapi_plugin_file)
156
+ try:
157
+ config_dict = openapi_schema_convert(
158
+ schema=openapi_plugin_cfg.schema,
159
+ auth=openapi_plugin_cfg.auth.to_dict())
160
+ plugin_cfg = Config(config_dict)
161
+ for name, config in config_dict.items():
162
+ available_plugin_list.append(name)
163
+ except Exception as e:
164
+ logger.error(
165
+ uuid=uuid_str,
166
+ error=str(e),
167
+ content={
168
+ 'error_traceback':
169
+ traceback.format_exc(),
170
+ 'error_details':
171
+ 'The format of the plugin config file is incorrect.'
172
+ })
173
+
174
+ return builder_cfg, model_cfg, tool_cfg, available_tool_list, plugin_cfg, available_plugin_list
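A small sketch of the per-user path mapping implemented by `get_user_cfg_file` (illustrative only, not part of this commit; the uuid is hypothetical and no BUILDER_CONFIG_FILE override is assumed):

```python
# Illustrative sketch: user-specific configs live under config/<uuid>/, derived
# from the default /tmp/agentfabric/config/builder_config.json path.
print(get_user_cfg_file())          # /tmp/agentfabric/config/user/builder_config.json
print(get_user_cfg_file('abc123'))  # /tmp/agentfabric/config/abc123/builder_config.json
```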
custom_prompt.py ADDED
@@ -0,0 +1,331 @@
1
+ import copy
2
+ import os
3
+ import re
4
+
5
+ import json
6
+ from config_utils import get_user_cfg_file
7
+ from modelscope_agent.prompt.prompt import (KNOWLEDGE_INTRODUCTION_PROMPT,
8
+ KNOWLEDGE_PROMPT, LengthConstraint,
9
+ PromptGenerator, build_raw_prompt)
10
+
11
+ from modelscope.utils.config import Config
12
+
13
+ DEFAULT_SYSTEM_TEMPLATE = """
14
+
15
+ # Tools
16
+
17
+ ## You have the following tools:
18
+
19
+ <tool_list>
20
+
21
+ ## When you need to call a tool, please intersperse the following tool command in your reply. %s
22
+
23
+ Tool Invocation
24
+ Action: The name of the tool, must be one of <tool_name_list>
25
+ Action Input: Tool input
26
+ Observation: <result>Tool returns result</result>
27
+ Answer: Summarize the results of this tool call based on Observation. If the result contains url, please do not show it.
28
+
29
+ ```
30
+ [Link](url)
31
+ ```
32
+
33
+ # Instructions
34
+ """ % 'You can call zero or more times according to your needs:'
35
+
36
+ DEFAULT_SYSTEM_TEMPLATE_WITHOUT_TOOL = """
37
+
38
+ # Instructions
39
+ """
40
+
41
+ DEFAULT_INSTRUCTION_TEMPLATE = ''
42
+
43
+ DEFAULT_USER_TEMPLATE = (
44
+ '(You are playing as <role_name>, you can use tools: <tool_name_list><knowledge_note>)<file_names><user_input>'
45
+ )
46
+
47
+ DEFAULT_USER_TEMPLATE_WITHOUT_TOOL = """(You are playing as <role_name><knowledge_note>) <file_names><user_input>"""
48
+
49
+ DEFAULT_EXEC_TEMPLATE = """Observation: <result><exec_result></result>\nAnswer:"""
50
+
51
+ TOOL_DESC = (
52
+ '{name_for_model}: {name_for_human} API. {description_for_model} Input parameters: {parameters}'
53
+ )
54
+
55
+
56
+ class CustomPromptGenerator(PromptGenerator):
57
+
58
+ def __init__(
59
+ self,
60
+ system_template=DEFAULT_SYSTEM_TEMPLATE,
61
+ instruction_template=DEFAULT_INSTRUCTION_TEMPLATE,
62
+ user_template=DEFAULT_USER_TEMPLATE,
63
+ exec_template=DEFAULT_EXEC_TEMPLATE,
64
+ assistant_template='',
65
+ sep='\n\n',
66
+ llm=None,
67
+ length_constraint=LengthConstraint(),
68
+ tool_desc=TOOL_DESC,
69
+ default_user_template_without_tool=DEFAULT_USER_TEMPLATE_WITHOUT_TOOL,
70
+ default_system_template_without_tool=DEFAULT_SYSTEM_TEMPLATE_WITHOUT_TOOL,
71
+ addition_assistant_reply='OK.',
72
+ **kwargs):
73
+
74
+ # hack here for special prompt, such as add an addition round before user input
75
+ self.add_addition_round = kwargs.get('add_addition_round', False)
76
+ self.addition_assistant_reply = addition_assistant_reply
77
+ builder_cfg_file = get_user_cfg_file(
78
+ uuid_str=kwargs.get('uuid_str', ''))
79
+ builder_cfg = Config.from_file(builder_cfg_file)
80
+ self.builder_cfg = builder_cfg
81
+ self.knowledge_file_name = kwargs.get('knowledge_file_name', '')
82
+ if not len(instruction_template):
83
+ instruction_template = self._parse_role_config(builder_cfg)
84
+
85
+ self.llm = llm
86
+ self.prompt_preprocessor = build_raw_prompt(llm.model_id)
87
+ self.length_constraint = length_constraint
88
+ self._parse_length_restriction()
89
+
90
+ self.tool_desc = tool_desc
91
+ self.default_user_template_without_tool = default_user_template_without_tool
92
+ self.default_system_template_without_tool = default_system_template_without_tool
93
+
94
+ super().__init__(
95
+ system_template=system_template,
96
+ instruction_template=instruction_template,
97
+ user_template=user_template,
98
+ exec_template=exec_template,
99
+ assistant_template=assistant_template,
100
+ sep=sep,
101
+ llm=llm,
102
+ length_constraint=length_constraint)
103
+
104
+ def _parse_role_config(self, config: dict):
105
+ prompt = 'You are playing as an AI-Agent, '
106
+
107
+ # concat prompt
108
+ if 'name' in config and config['name']:
109
+ prompt += ('Your name is ' + config['name'] + '.')
110
+ if 'description' in config and config['description']:
111
+ prompt += config['description']
112
+ prompt += '\nYou have the following specific functions:'
113
+
114
+ if 'instruction' in config and config['instruction']:
115
+ if isinstance(config['instruction'], list):
116
+ for ins in config['instruction']:
117
+ prompt += ins
118
+ prompt += ';'
119
+ elif isinstance(config['instruction'], str):
120
+ prompt += config['instruction']
121
+ if prompt[-1] == ';':
122
+ prompt = prompt[:-1]
123
+
124
+ prompt += '\nNow you will start playing as'
125
+ if 'name' in config and config['name']:
126
+ prompt += config['name']
127
+ prompt += ', say "OK." if you understand, do not say anything else.'
128
+
129
+ return prompt
130
+
131
+ def _parse_length_restriction(self):
132
+ constraint = self.llm.cfg.get('length_constraint', None)
133
+ # if isinstance(constraint, Config):
134
+ # constraint = constraint.to_dict()
135
+ self.length_constraint.update(constraint)
136
+
137
+ def _update_user_prompt_without_knowledge(self, task, tool_list, **kwargs):
138
+ if len(tool_list) > 0:
139
+ # user input
140
+ user_input = self.user_template.replace('<role_name>',
141
+ self.builder_cfg.name)
142
+ user_input = user_input.replace(
143
+ '<tool_name_list>',
144
+ ','.join([tool.name for tool in tool_list]))
145
+ else:
146
+ self.user_template = self.default_user_template_without_tool
147
+ user_input = self.user_template.replace('<user_input>', task)
148
+ user_input = user_input.replace('<role_name>',
149
+ self.builder_cfg.name)
150
+
151
+ user_input = user_input.replace('<user_input>', task)
152
+
153
+ if 'append_files' in kwargs:
154
+ append_files = kwargs.get('append_files', [])
155
+
156
+ # remove all files that should add to knowledge
157
+ # exclude_extensions = {".txt", ".md", ".pdf"}
158
+ # filtered_files = [file for file in append_files if
159
+ # not any(file.endswith(ext) for ext in exclude_extensions)]
160
+
161
+ if len(append_files) > 0:
162
+ file_names = ','.join(
163
+ [os.path.basename(path) for path in append_files])
164
+ user_input = user_input.replace('<file_names>',
165
+ f'[上传文件{file_names}]')
166
+ else:
167
+ user_input = user_input.replace('<file_names>', '')
168
+ else:
169
+ user_input = user_input.replace('<file_names>', '')
170
+
171
+ return user_input
172
+
173
+ def _get_knowledge_template(self):
174
+ return '. Please read the knowledge base at the beginning.'
175
+
176
+ def init_prompt(self, task, tool_list, knowledge_list, **kwargs):
177
+
178
+ if len(self.history) == 0:
179
+
180
+ self.history.append({
181
+ 'role': 'system',
182
+ 'content': 'You are a helpful assistant.'
183
+ })
184
+
185
+ if len(tool_list) > 0:
186
+ prompt = f'{self.system_template}\n{self.instruction_template}'
187
+
188
+ # get tool description str
189
+ tool_str = self.get_tool_str(tool_list)
190
+ prompt = prompt.replace('<tool_list>', tool_str)
191
+
192
+ tool_name_str = self.get_tool_name_str(tool_list)
193
+ prompt = prompt.replace('<tool_name_list>', tool_name_str)
194
+ else:
195
+ self.system_template = self.default_system_template_without_tool
196
+ prompt = f'{self.system_template}\n{self.instruction_template}'
197
+
198
+ user_input = self._update_user_prompt_without_knowledge(
199
+ task, tool_list, **kwargs)
200
+
201
+ if len(knowledge_list) > 0:
202
+ user_input = user_input.replace('<knowledge_note>',
203
+ self._get_knowledge_template())
204
+ else:
205
+ user_input = user_input.replace('<knowledge_note>', '')
206
+
207
+ self.system_prompt = copy.deepcopy(prompt)
208
+
209
+ # build history
210
+ if self.add_addition_round:
211
+ self.history.append({
212
+ 'role': 'user',
213
+ 'content': self.system_prompt
214
+ })
215
+ self.history.append({
216
+ 'role': 'assistant',
217
+ 'content': self.addition_assistant_reply
218
+ })
219
+ self.history.append({'role': 'user', 'content': user_input})
220
+ self.history.append({
221
+ 'role': 'assistant',
222
+ 'content': self.assistant_template
223
+ })
224
+ else:
225
+ self.history.append({
226
+ 'role': 'user',
227
+ 'content': self.system_prompt + user_input
228
+ })
229
+ self.history.append({
230
+ 'role': 'assistant',
231
+ 'content': self.assistant_template
232
+ })
233
+
234
+ self.function_calls = self.get_function_list(tool_list)
235
+ else:
236
+ user_input = self._update_user_prompt_without_knowledge(
237
+ task, tool_list, **kwargs)
238
+ if len(knowledge_list) > 0:
239
+ user_input = user_input.replace('<knowledge_note>',
240
+ self._get_knowledge_template())
241
+ else:
242
+ user_input = user_input.replace('<knowledge_note>', '')
243
+
244
+ self.history.append({'role': 'user', 'content': user_input})
245
+ self.history.append({
246
+ 'role': 'assistant',
247
+ 'content': self.assistant_template
248
+ })
249
+
250
+ if len(knowledge_list) > 0:
251
+ knowledge_str = self.get_knowledge_str(
252
+ knowledge_list,
253
+ file_name=self.knowledge_file_name,
254
+ only_content=True)
255
+ self.update_knowledge_str(knowledge_str)
256
+
257
+ def _get_tool_template(self):
258
+ return '\n\n# Tools\n\n'
259
+
260
+ def update_knowledge_str(self, knowledge_str):
261
+ """If knowledge base information was not used previously, it will be added;
262
+ if knowledge base information was previously used, it will be replaced.
263
+
264
+ Args:
265
+ knowledge_str (str): knowledge str generated by get_knowledge_str
266
+ """
267
+ knowledge_introduction = KNOWLEDGE_INTRODUCTION_PROMPT.replace(
268
+ '<file_name>', self.knowledge_file_name)
269
+ if len(knowledge_str) > self.length_constraint.knowledge:
270
+ # todo: use tokenizer to constrain length
271
+ knowledge_str = knowledge_str[-self.length_constraint.knowledge:]
272
+ knowledge_str = f'{KNOWLEDGE_PROMPT}{self.sep}{knowledge_introduction}{self.sep}{knowledge_str}'
273
+
274
+ for i in range(0, len(self.history)):
275
+ if self.history[i]['role'] == 'user':
276
+ content: str = self.history[i]['content']
277
+ start_pos = content.find(f'{KNOWLEDGE_PROMPT}{self.sep}')
278
+ end_pos = content.rfind(self._get_tool_template())
279
+ if start_pos >= 0 and end_pos >= 0: # replace knowledge
280
+
281
+ self.history[i]['content'] = content[
282
+ 0:start_pos] + knowledge_str + content[end_pos:]
283
+ break
284
+ elif start_pos < 0 and end_pos == 0: # add knowledge
285
+ self.history[i]['content'] = knowledge_str + content
286
+ break
287
+ else:
288
+ continue
289
+
290
+ def get_tool_str(self, tool_list):
291
+ tool_texts = []
292
+ for tool in tool_list:
293
+ tool_texts.append(
294
+ self.tool_desc.format(
295
+ name_for_model=tool.name,
296
+ name_for_human=tool.name,
297
+ description_for_model=tool.description,
298
+ parameters=json.dumps(tool.parameters,
299
+ ensure_ascii=False)))
300
+ # + ' ' + FORMAT_DESC['json'])
301
+ tool_str = '\n\n'.join(tool_texts)
302
+ return tool_str
303
+
304
+ def get_tool_name_str(self, tool_list):
305
+ tool_name = []
306
+ for tool in tool_list:
307
+ tool_name.append(tool.name)
308
+
309
+ tool_name_str = json.dumps(tool_name, ensure_ascii=False)
310
+ return tool_name_str
311
+
312
+ def _generate(self, llm_result, exec_result: str):
313
+ """
314
+ Generate the next-round prompt from the previous llm_result and exec_result, and update the history.
315
+ """
316
+ if len(llm_result) != 0:
317
+ self.history[-1]['content'] += f'{llm_result}'
318
+ if len(exec_result) != 0:
319
+ # handle image markdown wrapper
320
+ image_markdown_re = re.compile(
321
+ pattern=r'!\[IMAGEGEN\]\(([\s\S]+)\)')
322
+ match = image_markdown_re.search(exec_result)
323
+ if match is not None:
324
+ exec_result = match.group(1).rstrip()
325
+ exec_result = self.exec_template.replace('<exec_result>',
326
+ str(exec_result))
327
+ self.history[-1]['content'] += exec_result
328
+
329
+ # generate plate prompt here
330
+ self.prompt = self.prompt_preprocessor(self.history)
331
+ return self.prompt
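A small sketch of how one tool is rendered into the system prompt via `TOOL_DESC` (illustrative only, not part of this commit; the tool attributes are invented):

```python
# Illustrative sketch: TOOL_DESC turns a tool's metadata into the single line
# that get_tool_str joins into the <tool_list> section of the system prompt.
desc = TOOL_DESC.format(
    name_for_model='image_gen',
    name_for_human='image_gen',
    description_for_model='Wanx text-to-image generation service.',
    parameters='[{"name": "text", "required": true}]')
print(desc)
# image_gen: image_gen API. Wanx text-to-image generation service. Input parameters: [{"name": "text", "required": true}]
```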
custom_prompt_zh.py ADDED
@@ -0,0 +1,102 @@
1
+ from custom_prompt import CustomPromptGenerator
2
+
3
+ DEFAULT_SYSTEM_TEMPLATE = """
4
+
5
+ # 工具
6
+
7
+ ## 你拥有如下工具:
8
+
9
+ <tool_list>
10
+
11
+ ## 当你需要调用工具时,请在你的回复中穿插如下的工具调用命令,可以根据需求调用零次或多次:
12
+
13
+ 工具调用
14
+ Action: 工具的名称,必须是<tool_name_list>之一
15
+ Action Input: 工具的输入
16
+ Observation: <result>工具返回的结果</result>
17
+ Answer: 根据Observation总结本次工具调用返回的结果,如果结果中出现url,请不要展示出。
18
+
19
+ ```
20
+ [链接](url)
21
+ ```
22
+
23
+ # 指令
24
+ """
25
+
26
+ DEFAULT_SYSTEM_TEMPLATE_WITHOUT_TOOL = """
27
+
28
+ # 指令
29
+ """
30
+
31
+ DEFAULT_INSTRUCTION_TEMPLATE = ''
32
+
33
+ DEFAULT_USER_TEMPLATE = (
34
+ """(你正在扮演<role_name>,你可以使用工具:<tool_name_list><knowledge_note>)<file_names><user_input>"""
35
+ )
36
+
37
+ DEFAULT_USER_TEMPLATE_WITHOUT_TOOL = """(你正在扮演<role_name><knowledge_note>) <file_names><user_input>"""
38
+
39
+ DEFAULT_EXEC_TEMPLATE = """Observation: <result><exec_result></result>\nAnswer:"""
40
+
41
+ TOOL_DESC = (
42
+ '{name_for_model}: {name_for_human} API。 {description_for_model} 输入参数: {parameters}'
43
+ )
44
+
45
+
46
+ class ZhCustomPromptGenerator(CustomPromptGenerator):
47
+
48
+ def __init__(
49
+ self,
50
+ system_template=DEFAULT_SYSTEM_TEMPLATE,
51
+ instruction_template=DEFAULT_INSTRUCTION_TEMPLATE,
52
+ user_template=DEFAULT_USER_TEMPLATE,
53
+ exec_template=DEFAULT_EXEC_TEMPLATE,
54
+ tool_desc=TOOL_DESC,
55
+ default_user_template_without_tool=DEFAULT_USER_TEMPLATE_WITHOUT_TOOL,
56
+ default_system_template_without_tool=DEFAULT_SYSTEM_TEMPLATE_WITHOUT_TOOL,
57
+ addition_assistant_reply='好的。',
58
+ **kwargs):
59
+ super().__init__(
60
+ system_template=system_template,
61
+ instruction_template=instruction_template,
62
+ user_template=user_template,
63
+ exec_template=exec_template,
64
+ tool_desc=tool_desc,
65
+ default_user_template_without_tool=
66
+ default_user_template_without_tool,
67
+ default_system_template_without_tool=
68
+ default_system_template_without_tool,
69
+ **kwargs)
70
+
71
+ def _parse_role_config(self, config: dict):
72
+ prompt = '你扮演AI-Agent,'
73
+
74
+ # concat prompt
75
+ if 'name' in config and config['name']:
76
+ prompt += ('你的名字是' + config['name'] + '。')
77
+ if 'description' in config and config['description']:
78
+ prompt += config['description']
79
+ prompt += '\n你具有下列具体功能:'
80
+
81
+ if 'instruction' in config and config['instruction']:
82
+ if isinstance(config['instruction'], list):
83
+ for ins in config['instruction']:
84
+ prompt += ins
85
+ prompt += ';'
86
+ elif isinstance(config['instruction'], str):
87
+ prompt += config['instruction']
88
+ if prompt[-1] == ';':
89
+ prompt = prompt[:-1]
90
+
91
+ prompt += '\n下面你将开始扮演'
92
+ if 'name' in config and config['name']:
93
+ prompt += config['name']
94
+ prompt += ',明白了请说“好的。”,不要说其他的。'
95
+
96
+ return prompt
97
+
98
+ def _get_tool_template(self):
99
+ return '\n\n# 工具\n\n'
100
+
101
+ def _get_knowledge_template(self):
102
+ return '。请查看前面的知识库'
gradio_utils.py ADDED
@@ -0,0 +1,409 @@
1
+ from __future__ import annotations
2
+ import base64
3
+ import html
4
+ import os
5
+ import re
6
+ from urllib import parse
7
+
8
+ import json
9
+ import markdown
10
+ from gradio.components import Chatbot as ChatBotBase
11
+ from modelscope_agent.action_parser import MRKLActionParser
12
+ from PIL import Image
13
+
14
+ ALREADY_CONVERTED_MARK = '<!-- ALREADY CONVERTED BY PARSER. -->'
15
+
16
+
17
+ # Convert a local image path to a base64 data URL
18
+ def covert_image_to_base64(image_path):
19
+ # get the file extension
20
+ ext = image_path.split('.')[-1]
21
+ if ext not in ['gif', 'jpeg', 'png']:
22
+ ext = 'jpeg'
23
+
24
+ with open(image_path, 'rb') as image_file:
25
+ # Read the file
26
+ encoded_string = base64.b64encode(image_file.read())
27
+
28
+ # Convert bytes to string
29
+ base64_data = encoded_string.decode('utf-8')
30
+
31
+ # build the base64-encoded data URL
32
+ base64_url = f'data:image/{ext};base64,{base64_data}'
33
+ return base64_url
34
+
35
+
36
+ def convert_url(text, new_filename):
37
+ # Define the pattern to search for
38
+ # This pattern captures the text inside the square brackets, the path, and the filename
39
+ pattern = r'!\[([^\]]+)\]\(([^)]+)\)'
40
+
41
+ # Define the replacement pattern
42
+ # \1 is a backreference to the text captured by the first group ([^\]]+)
43
+ replacement = rf'![\1]({new_filename})'
44
+
45
+ # Replace the pattern in the text with the replacement
46
+ return re.sub(pattern, replacement, text)
47
+
48
+
49
+ def format_cover_html(configuration, bot_avatar_path):
50
+ if bot_avatar_path:
51
+ image_src = covert_image_to_base64(bot_avatar_path)
52
+ else:
53
+ image_src = '//img.alicdn.com/imgextra/i3/O1CN01YPqZFO1YNZerQfSBk_!!6000000003047-0-tps-225-225.jpg'
54
+ return f"""
55
+ <div class="bot_cover">
56
+ <div class="bot_avatar">
57
+ <img src={image_src} />
58
+ </div>
59
+ <div class="bot_name">{configuration.get("name", "")}</div>
60
+ <div class="bot_desp">{configuration.get("description", "")}</div>
61
+ </div>
62
+ """
63
+
64
+
65
+ def format_goto_publish_html(label, zip_url, agent_user_params, disable=False):
66
+ if disable:
67
+ return f"""<div class="publish_link_container">
68
+ <a class="disabled">{label}</a>
69
+ </div>
70
+ """
71
+ else:
72
+ params = {'AGENT_URL': zip_url}
73
+ params.update(agent_user_params)
74
+ template = 'modelscope/agent_template'
75
+ params_str = json.dumps(params)
76
+ link_url = f'https://www.modelscope.cn/studios/fork?target={template}&overwriteEnv={parse.quote(params_str)}'
77
+ return f"""
78
+ <div class="publish_link_container">
79
+ <a href="{link_url}" target="_blank">{label}</a>
80
+ </div>
81
+ """
82
+
83
+
84
+ class ChatBot(ChatBotBase):
85
+
86
+ def normalize_markdown(self, bot_message):
87
+ lines = bot_message.split('\n')
88
+ normalized_lines = []
89
+ inside_list = False
90
+
91
+ for i, line in enumerate(lines):
92
+ if re.match(r'^(\d+\.|-|\*|\+)\s', line.strip()):
93
+ if not inside_list and i > 0 and lines[i - 1].strip() != '':
94
+ normalized_lines.append('')
95
+ inside_list = True
96
+ normalized_lines.append(line)
97
+ elif inside_list and line.strip() == '':
98
+ if i < len(lines) - 1 and not re.match(r'^(\d+\.|-|\*|\+)\s',
99
+ lines[i + 1].strip()):
100
+ normalized_lines.append(line)
101
+ continue
102
+ else:
103
+ inside_list = False
104
+ normalized_lines.append(line)
105
+
106
+ return '\n'.join(normalized_lines)
107
+
108
+ def convert_markdown(self, bot_message):
109
+ if bot_message.count('```') % 2 != 0:
110
+ bot_message += '\n```'
111
+
112
+ bot_message = self.normalize_markdown(bot_message)
113
+
114
+ result = markdown.markdown(
115
+ bot_message,
116
+ extensions=[
117
+ 'toc', 'extra', 'tables', 'codehilite',
118
+ 'markdown_cjk_spacing.cjk_spacing', 'pymdownx.magiclink'
119
+ ],
120
+ extension_configs={
121
+ 'markdown_katex': {
122
+ 'no_inline_svg': True, # fix for WeasyPrint
123
+ 'insert_fonts_css': True,
124
+ },
125
+ 'codehilite': {
126
+ 'linenums': False,
127
+ 'guess_lang': True
128
+ },
129
+ 'mdx_truly_sane_lists': {
130
+ 'nested_indent': 2,
131
+ 'truly_sane': True,
132
+ }
133
+ })
134
+ result = ''.join(result)
135
+ return result
136
+
137
+ @staticmethod
138
+ def prompt_parse(message):
139
+ output = ''
140
+ if 'Thought' in message:
141
+ if 'Action' in message or 'Action Input:' in message:
142
+ re_pattern_thought = re.compile(
143
+ pattern=r'([\s\S]+)Thought:([\s\S]+)Action:')
144
+
145
+ res = re_pattern_thought.search(message)
146
+
147
+ if res is None:
148
+ re_pattern_thought_only = re.compile(
149
+ pattern=r'Thought:([\s\S]+)Action:')
150
+ res = re_pattern_thought_only.search(message)
151
+ llm_result = ''
152
+ else:
153
+ llm_result = res.group(1).strip()
154
+ action_thought_result = res.group(2).strip()
155
+
156
+ re_pattern_action = re.compile(
157
+ pattern=
158
+ r'Action:([\s\S]+)Action Input:([\s\S]+)<\|startofexec\|>')
159
+ res = re_pattern_action.search(message)
160
+ if res is None:
161
+ action, action_parameters = MRKLActionParser(
162
+ ).parse_response(message)
163
+ else:
164
+ action = res.group(1).strip()
165
+ action_parameters = res.group(2)
166
+ action_result = json.dumps({
167
+ 'api_name': action,
168
+ 'parameters': action_parameters
169
+ })
170
+ output += f'{llm_result}\n{action_thought_result}\n<|startofthink|>\n{action_result}\n<|endofthink|>\n'
171
+ if '<|startofexec|>' in message:
172
+ re_pattern3 = re.compile(
173
+ pattern=r'<\|startofexec\|>([\s\S]+)<\|endofexec\|>')
174
+ res3 = re_pattern3.search(message)
175
+ observation = res3.group(1).strip()
176
+ output += f'\n<|startofexec|>\n{observation}\n<|endofexec|>\n'
177
+ if 'Final Answer' in message:
178
+ re_pattern2 = re.compile(
179
+ pattern=r'Thought:([\s\S]+)Final Answer:([\s\S]+)')
180
+ res2 = re_pattern2.search(message)
181
+ # final_thought_result = res2.group(1).strip()
182
+ final_answer_result = res2.group(2).strip()
183
+ output += f'{final_answer_result}\n'
184
+
185
+ if output == '':
186
+ return message
187
+ print(output)
188
+ return output
189
+ else:
190
+ return message
191
+
192
+ def convert_bot_message(self, bot_message):
193
+
194
+ bot_message = ChatBot.prompt_parse(bot_message)
195
+ # print('processed bot message----------')
196
+ # print(bot_message)
197
+ # print('processed bot message done')
198
+ start_pos = 0
199
+ result = ''
200
+ find_json_pattern = re.compile(r'{[\s\S]+}')
201
+ START_OF_THINK_TAG, END_OF_THINK_TAG = '<|startofthink|>', '<|endofthink|>'
202
+ START_OF_EXEC_TAG, END_OF_EXEC_TAG = '<|startofexec|>', '<|endofexec|>'
203
+ while start_pos < len(bot_message):
204
+ try:
205
+ start_of_think_pos = bot_message.index(START_OF_THINK_TAG,
206
+ start_pos)
207
+ end_of_think_pos = bot_message.index(END_OF_THINK_TAG,
208
+ start_pos)
209
+ if start_pos < start_of_think_pos:
210
+ result += self.convert_markdown(
211
+ bot_message[start_pos:start_of_think_pos])
212
+ think_content = bot_message[start_of_think_pos
213
+ + len(START_OF_THINK_TAG
214
+ ):end_of_think_pos].strip()
215
+ json_content = find_json_pattern.search(think_content)
216
+ think_content = json_content.group(
217
+ ) if json_content else think_content
218
+ try:
219
+ think_node = json.loads(think_content)
220
+ plugin_name = think_node.get(
221
+ 'plugin_name',
222
+ think_node.get('plugin',
223
+ think_node.get('api_name', 'unknown')))
224
+ summary = f'选择插件【{plugin_name}】,调用处理中...'
225
+ think_node.pop('url', None)  # avoid KeyError when 'url' is absent
226
227
+
228
+ detail = f'```json\n\n{json.dumps(think_node, indent=3, ensure_ascii=False)}\n\n```'
229
+ except Exception:
230
+ summary = '思考中...'
231
+ detail = think_content
232
+ # traceback.print_exc()
233
+ # detail += traceback.format_exc()
234
+ result += '<details> <summary>' + summary + '</summary>' + self.convert_markdown(
235
+ detail) + '</details>'
236
+ # print(f'detail:{detail}')
237
+ start_pos = end_of_think_pos + len(END_OF_THINK_TAG)
238
+ except Exception:
239
+ # result += traceback.format_exc()
240
+ break
241
+ # continue
242
+
243
+ try:
244
+ start_of_exec_pos = bot_message.index(START_OF_EXEC_TAG,
245
+ start_pos)
246
+ end_of_exec_pos = bot_message.index(END_OF_EXEC_TAG, start_pos)
247
+ # print(start_of_exec_pos)
248
+ # print(end_of_exec_pos)
249
+ # print(bot_message[start_of_exec_pos:end_of_exec_pos])
250
+ # print('------------------------')
251
+ if start_pos < start_of_exec_pos:
252
+ result += self.convert_markdown(
253
+ bot_message[start_pos:start_of_exec_pos])
254
+ exec_content = bot_message[start_of_exec_pos
255
+ + len(START_OF_EXEC_TAG
256
+ ):end_of_exec_pos].strip()
257
+ try:
258
+ summary = '完成插件调用.'
259
+ detail = f'```json\n\n{exec_content}\n\n```'
260
+ except Exception:
261
+ pass
262
+
263
+ result += '<details> <summary>' + summary + '</summary>' + self.convert_markdown(
264
+ detail) + '</details>'
265
+
266
+ start_pos = end_of_exec_pos + len(END_OF_EXEC_TAG)
267
+ except Exception:
268
+ # result += traceback.format_exc()
269
+ continue
270
+ if start_pos < len(bot_message):
271
+ result += self.convert_markdown(bot_message[start_pos:])
272
+ result += ALREADY_CONVERTED_MARK
273
+ return result
274
+
275
+ def convert_bot_message_for_qwen(self, bot_message):
276
+
277
+ start_pos = 0
278
+ result = ''
279
+ find_json_pattern = re.compile(r'{[\s\S]+}')
280
+ ACTION = 'Action:'
281
+ ACTION_INPUT = 'Action Input'
282
+ OBSERVATION = 'Observation'
283
+ RESULT_START = '<result>'
284
+ RESULT_END = '</result>'
285
+ while start_pos < len(bot_message):
286
+ try:
287
+ action_pos = bot_message.index(ACTION, start_pos)
288
+ action_input_pos = bot_message.index(ACTION_INPUT, start_pos)
289
+ result += self.convert_markdown(
290
+ bot_message[start_pos:action_pos])
291
+ # Action: image_gen
292
+ # Action Input
293
+ # {"text": "金庸武侠 世界", "resolution": "1280x720"}
294
+ # Observation: <result>![IMAGEGEN](https://dashscope-result-sh.oss-cn-shanghai.aliyuncs.com/1d/e9/20231116/723609ee/d046d2d9-0c95-420b-9467-f0e831f5e2b7-1.png?Expires=1700227460&OSSAccessKeyId=LTAI5tQZd8AEcZX6KZV4G8qL&Signature=R0PlEazQF9uBD%2Fh9tkzOkJMGyg8%3D)<result> # noqa E501
295
+ action_name = bot_message[action_pos
296
+ + len(ACTION
297
+ ):action_input_pos].strip()
298
+ # the action input spans from 'Action Input' up to just before 'Observation'
299
+ action_input_end = bot_message[action_input_pos:].index(
300
+ OBSERVATION) - 1
301
+ action_input = bot_message[action_input_pos:action_input_pos
302
+ + action_input_end].strip()
303
+ is_json = find_json_pattern.search(action_input)
304
+ if is_json:
305
+ action_input = is_json.group()
306
+ else:
307
+ action_input = re.sub(r'^Action Input[:]?[\s]*', '',
308
+ action_input)
309
+
310
+ summary = f'调用工具 {action_name}'
311
+ if is_json:
312
+ detail = f'```json\n\n{json.dumps(json.loads(action_input), indent=4, ensure_ascii=False)}\n\n```'
313
+ else:
314
+ detail = action_input
315
+ result += '<details> <summary>' + summary + '</summary>' + self.convert_markdown(
316
+ detail) + '</details>'
317
+ start_pos = action_input_pos + action_input_end + 1
318
+ try:
319
+ observation_pos = bot_message.index(OBSERVATION, start_pos)
320
+ idx = observation_pos + len(OBSERVATION)
321
+ obs_message = bot_message[idx:]
322
+ observation_start_id = obs_message.index(
323
+ RESULT_START) + len(RESULT_START)
324
+ observation_end_idx = obs_message.index(RESULT_END)
325
+ summary = '完成调用'
326
+ exec_content = obs_message[
327
+ observation_start_id:observation_end_idx]
328
+ detail = f'```\n\n{exec_content}\n\n```'
329
+ start_pos = idx + observation_end_idx + len(RESULT_END)
330
+ except Exception:
331
+ summary = '执行中...'
332
+ detail = ''
333
+ exec_content = None
334
+
335
+ result += '<details> <summary>' + summary + '</summary>' + self.convert_markdown(
336
+ detail) + '</details>'
337
+ if exec_content is not None and '[IMAGEGEN]' in exec_content:
338
+ # convert local file to base64
339
+ re_pattern = re.compile(pattern=r'!\[[^\]]+\]\(([^)]+)\)')
340
+ res = re_pattern.search(exec_content)
341
+ if res:
342
+ image_path = res.group(1).strip()
343
+ if os.path.isfile(image_path):
344
+ exec_content = convert_url(
345
+ exec_content,
346
+ covert_image_to_base64(image_path))
347
+ result += self.convert_markdown(f'{exec_content}')
348
+
349
+ except Exception:
350
+ # import traceback; traceback.print_exc()
351
+ result += self.convert_markdown(bot_message[start_pos:])
352
+ start_pos = len(bot_message[start_pos:])
353
+ break
354
+
355
+ result += ALREADY_CONVERTED_MARK
356
+ return result
357
+
358
+ def postprocess(
359
+ self,
360
+ message_pairs: list[list[str | tuple[str] | tuple[str, str] | None]
361
+ | tuple],
362
+ ) -> list[list[str | dict | None]]:
363
+ """
364
+ Parameters:
365
+ message_pairs: List of lists representing the message and response pairs.
366
+ Each message and response should be a string, which may be in Markdown format.
367
+ It can also be a tuple whose first element is a string or pathlib.
368
+ Path filepath or URL to an image/video/audio, and second (optional) element is the alt text,
369
+ in which case the media file is displayed. It can also be None, in which case that message is not displayed.
370
+ Returns:
371
+ List of lists representing the message and response. Each message and response will be a string of HTML,
372
+ or a dictionary with media information. Or None if the message is not to be displayed.
373
+ """
374
+ if message_pairs is None:
375
+ return []
376
+ processed_messages = []
377
+ for message_pair in message_pairs:
378
+ assert isinstance(
379
+ message_pair, (tuple, list)
380
+ ), f'Expected a list of lists or list of tuples. Received: {message_pair}'
381
+ assert (
382
+ len(message_pair) == 2
383
+ ), f'Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}'
384
+ if isinstance(message_pair[0], tuple) or isinstance(
385
+ message_pair[1], tuple):
386
+ processed_messages.append([
387
+ self._postprocess_chat_messages(message_pair[0]),
388
+ self._postprocess_chat_messages(message_pair[1]),
389
+ ])
390
+ else:
391
+ # handle the plain-string (non-tuple) case
392
+ user_message, bot_message = message_pair
393
+
394
+ if user_message and not user_message.endswith(
395
+ ALREADY_CONVERTED_MARK):
396
+ convert_md = self.convert_markdown(
397
+ html.escape(user_message))
398
+ user_message = f'{convert_md}' + ALREADY_CONVERTED_MARK
399
+ if bot_message and not bot_message.endswith(
400
+ ALREADY_CONVERTED_MARK):
401
+ # bot_message = self.convert_bot_message(bot_message)
402
+ bot_message = self.convert_bot_message_for_qwen(
403
+ bot_message)
404
+ processed_messages.append([
405
+ user_message,
406
+ bot_message,
407
+ ])
408
+
409
+ return processed_messages
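A small sketch of the image rewriting helpers above (illustrative only, not part of this commit; the file path is hypothetical):

```python
# Illustrative sketch: a generated image markdown link is rewritten to an
# inline base64 data URL so the chatbot can render it without serving the file.
text = '![IMAGEGEN](/tmp/ci_workspace/result.png)'
inline = convert_url(text, covert_image_to_base64('/tmp/ci_workspace/result.png'))
print(inline[:60])   # ![IMAGEGEN](data:image/png;base64,...
```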
help_tools.py ADDED
@@ -0,0 +1,176 @@
1
+ import os
2
+ from http import HTTPStatus
3
+
4
+ import json
5
+ import requests
6
+ from config_utils import DEFAULT_BUILDER_CONFIG_DIR, get_user_cfg_file
7
+ from dashscope import ImageSynthesis
8
+ from modelscope_agent.tools import Tool
9
+ from modelscope_agent.utils.logger import agent_logger as logger
10
+
11
+ from modelscope.utils.config import Config
12
+
13
+ LOGO_NAME = 'custom_bot_avatar.png'
14
+ LOGO_PATH = os.path.join(DEFAULT_BUILDER_CONFIG_DIR, LOGO_NAME)
15
+
16
+ CONFIG_FORMAT = """
17
+ {
18
+ "name": ... # CustomGPT的名字。
19
+ "description": ... # CustomGPT 的简介。
20
+ "instructions": ... # CustomGPT 的功能要求,类型是string。
21
+ "prompt_recommend": ... # CustomGPT 的起始交互语句,类型是一个字符串数组,起始为[]。
22
+ }
23
+ """
24
+
25
+ CONF_GENERATOR_INST = """你现在要扮演一个 CustomGPT 的配置生成器
26
+
27
+ 在接下来的对话中,每次均生成如下格式的内容:
28
+
29
+ {config_format}
30
+
31
+ 现在,已知原始配置为{old_config},用户在原始配置上有一些建议修改项,包括:
32
+ 1. 用户建议的 CustomGPT 的名称为{app_name}
33
+ 2. CustomGPT 的描述为{app_description}
34
+ 3. CustomGPT 的启动器为{app_conversation_starter}
35
+
36
+ 请你参考原始配置生成新的修改后的配置,请注意:
37
+ 1. 如果用户对原本的简介、功能要求、交互语句不满意,则直接换掉原本的简介、功能要求、交互语句。
38
+ 2. 如果用户对原本的简介、功能要求、交互语句比较满意,参考用户的起始交互语句和原配置中的起始交互语句,生成新的简介、功能要求、交互语句。
39
+ 3. 如果原始配置没有实际内容,请你根据你的知识帮助用户生成第一个版本的配置,简介在100字左右,功能要求在150字左右,起始交互语句在4条左右。
40
+
41
+ 请你生成新的配置文件,严格遵循给定格式,请不要创造其它字段,仅输出要求的json格式,请勿输出其它内容。
42
+ """
43
+
44
+ LOGO_INST = """定制化软件 CustomGPT 的作用是{description},{user_requirement}请你为它生成一个专业的logo"""
45
+
46
+
47
+ def get_logo_path(uuid_str=''):
48
+ logo_path = os.getenv('LOGO_PATH', LOGO_PATH)
49
+ # convert from ./config/builder_config.json to ./config/user/builder_config.json
50
+ logo_path = logo_path.replace('config/', 'config/user/')
51
+
52
+ # convert from ./config/user to ./config/uuid
53
+ if uuid_str != '':
54
+ logo_path = logo_path.replace('user', uuid_str)
55
+ if not os.path.exists(logo_path):
56
+ os.makedirs(os.path.dirname(logo_path), exist_ok=True)
57
+ return logo_path
58
+
59
+
60
+ def call_wanx(prompt, save_path, uuid_str):
61
+ rsp = ImageSynthesis.call(
62
+ model='wanx-lite', prompt=prompt, n=1, size='768*768')
63
+ if rsp.status_code == HTTPStatus.OK:
64
+ if os.path.exists(save_path):
65
+ os.remove(save_path)
66
+
67
+ # save file to current directory
68
+ for result in rsp.output.results:
69
+ with open(save_path, 'wb+') as f:
70
+ f.write(requests.get(result.url).content)
71
+ else:
72
+ logger.error(
73
+ uuid=uuid_str,
74
+ error='wanx error',
75
+ content={
76
+ 'wanx_status_code': rsp.status_code,
77
+ 'wanx_code': rsp.code,
78
+ 'wanx_message': rsp.message
79
+ })
80
+
81
+
82
+ class LogoGeneratorTool(Tool):
83
+ description = 'logo_designer是一个AI绘制logo的服务,输入用户对 CustomGPT 的要求,会生成 CustomGPT 的logo。'
84
+ name = 'logo_designer'
85
+ parameters: list = [{
86
+ 'name': 'user_requirement',
87
+ 'description': '用户对 CustomGPT logo的要求和建议',
88
+ 'required': True,
89
+ 'schema': {
90
+ 'type': 'string'
91
+ },
92
+ }]
93
+
94
+ def _remote_call(self, *args, **kwargs):
95
+ user_requirement = kwargs['user_requirement']
96
+ uuid_str = kwargs.get('uuid_str', '')
97
+ builder_cfg_file = get_user_cfg_file(uuid_str)
98
+ builder_cfg = Config.from_file(builder_cfg_file)
99
+
100
+ avatar_prompt = LOGO_INST.format(
101
+ description=builder_cfg.description,
102
+ user_requirement=user_requirement)
103
+ call_wanx(
104
+ prompt=avatar_prompt,
105
+ save_path=get_logo_path(uuid_str=uuid_str),
106
+ uuid_str=uuid_str)
107
+ builder_cfg.avatar = LOGO_NAME
108
+ return {'result': builder_cfg}
109
+
110
+
111
+ def config_conversion(generated_config: dict, save=False, uuid_str=''):
112
+ """
113
+ convert
114
+ {
115
+ name: "铁人",
116
+ description: "我希望我的AI-Agent是一个专业的健身教练,专注于力量训练方面,可以提供相关的建议和指南。
117
+ 它还可以帮我跟踪和记录每次的力量训练数据,以及提供相应的反馈和建议,帮助我不断改进和优化我的训练计划。
118
+ 此外,我希望它可以拥有一些特殊技能和功能,让它更加实用和有趣。例如,它可以帮助我预测未来的身体状况、分析我的营养摄入情况、
119
+ 提供心理支持等等。我相信,在它的帮助下,我可以更快地达到自己的目标,变得更加强壮和健康。",
120
+ instructions: [
121
+ "提供力量训练相关的建议和指南",
122
+ "跟踪和记录每次的力量训练数据",
123
+ "提供反馈和建议,帮助改进和优化训练计划",
124
+ "预测未来的身体状况",
125
+ "分析营养摄入情况",
126
+ "提供心理支持",
127
+ ],
128
+ prompt_recommend: [
129
+ "你好,今天的锻炼计划是什么呢?",
130
+ "你觉得哪种器械最适合练背部肌肉呢?",
131
+ "你觉得我现在的训练强度合适吗?",
132
+ "你觉得哪种食物最适合增肌呢?",
133
+ ],
134
+ logo_prompt: "设计一个肌肉男形象的Logo",
135
+ }
136
+ to
137
+ {
138
+ name: "铁人",
139
+ description: "我希望我的AI-Agent是一个专业的健身教练,专注于力量训练方面,可以提供相关的建议和指南。
140
+ 它还可以帮我跟踪和记录每次的力量训练数据,以及提供相应的反馈和建议,帮助我不断改进和优化我的训练计划。
141
+ 此外,我希望它可以拥有一些特殊技能和功能,让它更加实用和有趣。例如,它可以帮助我预测未来的身体状况、
142
+ 分析我的营养摄入情况、提供心理支持等等。我相信,在它的帮助下,我可以更快地达到自己的目标,变得更加强壮和健康。",
143
+ instructions: "提供力量训练相关的建议和指南;跟踪和记录每次的力量训练数据;提供反馈和建议,帮助改进和优化训练计划;
144
+ 预测未来的身体状况;分析营养摄入情况;提供心理支持",
145
+ prompt_recommend: [
146
+ "你好,今天的锻炼计划是什么呢?",
147
+ "你觉得哪种器械最适合练背部肌肉呢?",
148
+ "你觉得我现在的训练强度合适吗?",
149
+ "你觉得哪种食物最适合增肌呢?",
150
+ ],
151
+ tools: xxx
152
+ model: yyy
153
+ }
154
+ :param generated_config:
155
+ :return:
156
+ """
157
+ builder_cfg_file = get_user_cfg_file(uuid_str)
158
+ builder_cfg = Config.from_file(builder_cfg_file)
159
+ try:
160
+ builder_cfg.name = generated_config['name']
161
+ builder_cfg.description = generated_config['description']
162
+ builder_cfg.prompt_recommend = generated_config['prompt_recommend']
163
+ if isinstance(generated_config['instructions'], list):
164
+ builder_cfg.instruction = ';'.join(
165
+ generated_config['instructions'])
166
+ else:
167
+ builder_cfg.instruction = generated_config['instructions']
168
+ if save:
169
+ json.dump(
170
+ builder_cfg.to_dict(),
171
+ open(builder_cfg_file, 'w'),
172
+ indent=2,
173
+ ensure_ascii=False)
174
+ return builder_cfg
175
+ except ValueError as e:
176
+ raise ValueError(f'failed to save the configuration with info: {e}')
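For reference, a hypothetical call to the `config_conversion` helper defined above. It assumes a valid `builder_config.json` already exists for the given uuid, and the input fields mirror `CONFIG_FORMAT`; all values are made up for illustration.

```python
# Hypothetical input mirroring CONFIG_FORMAT; assumes builder_config.json
# already exists for uuid 'local_user'.
generated_config = {
    'name': '铁人',
    'description': 'A strength-training coach agent.',
    'instructions': [
        'Provide strength-training advice',
        'Track workout data',
    ],
    'prompt_recommend': ['What is today\'s workout plan?'],
}

builder_cfg = config_conversion(generated_config, save=False, uuid_str='local_user')
# A list of instructions is joined with ';' into a single string:
print(builder_cfg.instruction)
# -> Provide strength-training advice;Track workout data
```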
i18n.py ADDED
@@ -0,0 +1,76 @@
1
+ # flake8: noqa
2
+ support_lang = ['zh-cn', 'en']
3
+
4
+ i18n = {
5
+ 'create': ['创建', 'Create'],
6
+ 'configure': ['配置', 'Configure'],
7
+ 'send': ['发送', 'Send'],
8
+ 'sendOnLoading': ['发送(Agent 加载中...)', 'Send (Agent Loading...)'],
9
+ 'upload_btn': ['上传文件', 'Upload File'],
10
+ 'message': ['输入', 'Send a message'],
11
+ 'message_placeholder': ['输入你的消息', 'Type your message here'],
12
+ 'prompt_suggestion': ['推荐提示词', 'Prompt Suggestions'],
13
+ 'form_avatar': ['头像', 'Avatar'],
14
+ 'form_name': ['名称', 'Name'],
15
+ 'form_name_placeholder': ['为你的 agent 取一个名字', 'Name your agent'],
16
+ 'form_description': ['描述', 'Description'],
17
+ 'form_description_placeholder': [
18
+ '为你的 agent 添加一段简短的描述',
19
+ 'Add a short description about what this agent does'
20
+ ],
21
+ 'form_instructions': ['指令', 'Instructions'],
22
+ 'form_instructions_placeholder': [
23
+ '你的 agent 需要处理哪些事情',
24
+ 'What does this agent do? How does it behave? What should it avoid doing?'
25
+ ],
26
+ 'form_model': ['模型', 'Model'],
27
+ 'form_agent_language': ['Agent 语言', 'Agent Language'],
28
+ 'form_prompt_suggestion':
29
+ ['推荐提示词,双击行可修改', 'Prompt suggestions, double-click a row to modify'],
30
+ 'form_knowledge': ['知识库', 'Knowledge Base'],
31
+ 'form_capabilities': ['内置能力', 'Capabilities'],
32
+ 'form_update_button': ['更新配置', 'Update Configuration'],
33
+ 'open_api_accordion': ['OpenAPI 配置', 'OpenAPI Configuration'],
34
+ 'preview': ['预览', 'Preview'],
35
+ 'build': ['构建', 'Build'],
36
+ 'publish': ['发布', 'Publish'],
37
+ 'import_config': ['导入配置', 'Import Config'],
38
+ 'space_addr': ['你的AGENT_URL', 'Your AGENT_URL'],
39
+ 'input_space_addr': ['输入你的AGENT_URL', 'Enter your AGENT_URL here'],
40
+ 'import_space': ['导入你的Agent', 'Import your existing agent'],
41
+ 'import_hint': [
42
+ '输入你创空间环境变量AGENT_URL,点击导入配置',
43
+ 'Enter the AGENT_URL environment variable of your space, then click Import Config'
44
+ ],
45
+ 'build_hint': ['点击"构建"完成构建', 'Click "Build" to finish building'],
46
+ 'publish_hint': [
47
+ '点击"发布"跳转创空间完成 Agent 发布',
48
+ 'Click "Publish" to jump to the space to finish agent publishing'
49
+ ],
50
+ 'publish_alert': [
51
+ """#### 注意:Agent实际发布时需要配置相关API的key。
52
+ - 千问、万相、艺术字等 DashScope API 所需: [申请入口](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key)
53
+ - 高德天气 API: [申请入口](https://lbs.amap.com/api/javascript-api-v2/guide/services/weather)""",
54
+ """#### Note: The key of the relevant API needs to be configured when the Agent is actually released.
55
+ - Qwen,Wanx,WordArt,etc DashScope API: [Application entrance](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key)
56
+ - Amap Weather API: [Application entrance](https://lbs.amap.com/api/javascript-api-v2/guide/services/weather)
57
+ """
58
+ ],
59
+ 'header': [
60
+ '<span style="font-size: 20px; font-weight: 500;">\N{fire} AgentFabric -- 由 Modelscope-agent 驱动 </span> [github 点赞](https://github.com/modelscope/modelscope-agent/tree/main)', # noqa E501
61
+ '<span style="font-size: 20px; font-weight: 500;">\N{fire} AgentFabric powered by Modelscope-agent </span> [github star](https://github.com/modelscope/modelscope-agent/tree/main)' # noqa E501
62
+ ],
63
+ }
64
+
65
+
66
+ class I18n():
67
+
68
+ def __init__(self, lang):
69
+ self.lang = lang
70
+ self.langIndex = support_lang.index(lang)
71
+
72
+ def get(self, field):
73
+ return i18n.get(field)[self.langIndex]
74
+
75
+ def get_whole(self, field):
76
+ return f'{i18n.get(field)[0]}({i18n.get(field)[1]})'
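A minimal usage sketch of the `I18n` helper defined above:

```python
# Sketch of looking up UI strings in both supported locales.
i18n_en = I18n('en')
i18n_zh = I18n('zh-cn')

print(i18n_en.get('create'))        # 'Create'
print(i18n_zh.get('create'))        # '创建'
print(i18n_en.get_whole('create'))  # '创建(Create)', both locales combined
```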
openapi_example/aigc_wordart_semantic.json ADDED
@@ -0,0 +1,147 @@
1
+ {
2
+ "openapi":"3.1.0",
3
+ "info":{
4
+ "title":"WordArt Semantic Generation API",
5
+ "description":"API for generating semantic word art with customizable parameters.",
6
+ "version":"v1.0.0"
7
+ },
8
+ "servers":[
9
+ {
10
+ "url":"https://dashscope.aliyuncs.com"
11
+ }
12
+ ],
13
+ "paths":{
14
+ "/api/v1/services/aigc/wordart/semantic":{
15
+ "post":{
16
+ "summary":"Generate WordArt Semantically",
17
+ "operationId":"generateWordArt",
18
+ "tags":[
19
+ "WordArt Generation"
20
+ ],
21
+ "requestBody":{
22
+ "required":true,
23
+ "X-DashScope-Async":"enable",
24
+ "content":{
25
+ "application/json":{
26
+ "schema":{
27
+ "$ref":"#/components/schemas/WordArtGenerationRequest"
28
+ }
29
+ }
30
+ }
31
+ },
32
+ "responses":{
33
+ "200":{
34
+ "description":"Successful Response",
35
+ "content":{
36
+ "application/json":{
37
+ "schema":{
38
+ "$ref":"#/components/schemas/WordArtGenerationResponse"
39
+ }
40
+ }
41
+ }
42
+ }
43
+ },
44
+ "security":[
45
+ {
46
+ "BearerAuth":[
47
+
48
+ ]
49
+ }
50
+ ]
51
+ }
52
+ },
53
+ "/api/v1/tasks/{task_id}":{
54
+ "get":{
55
+ "summary":"Get WordArt Result",
56
+ "operationId":"getwordartresult",
57
+ "tags":[
58
+ "Get Result"
59
+ ],
60
+ "parameters":[
61
+ {
62
+ "name":"task_id",
63
+ "in":"path",
64
+ "required":true,
65
+ "description":"The unique identifier of the word art generation task",
66
+ "schema":{
67
+ "type":"string"
68
+ }
69
+ }
70
+ ],
71
+ "security":[
72
+ {
73
+ "BearerAuth":[
74
+
75
+ ]
76
+ }
77
+ ]
78
+ }
79
+ }
80
+ },
81
+ "components":{
82
+ "schemas":{
83
+ "WordArtGenerationRequest":{
84
+ "type":"object",
85
+ "properties":{
86
+ "model":{
87
+ "type":"string",
88
+ "enum":[
89
+ "wordart-semantic"
90
+ ]
91
+ },
92
+ "input":{
93
+ "type":"object",
94
+ "properties":{
95
+ "text":{
96
+ "type":"string",
97
+ "example":"文字创意",
98
+ "description":"用户想要转为艺术字的文本",
99
+ "required":true
100
+ },
101
+ "prompt":{
102
+ "type":"string",
103
+ "example":"水果,蔬菜,温暖的色彩空间",
104
+ "description":"用户对艺术字的风格要求,可能是形状、颜色、实体等方面的要求",
105
+ "required":true
106
+ }
107
+ }
108
+ },
109
+ "parameters":{
110
+ "type":"object",
111
+ "properties":{
112
+ "steps":{
113
+ "type":"integer",
114
+ "example":80
115
+ },
116
+ "n":{
117
+ "type":"number",
118
+ "example":2
119
+ }
120
+ }
121
+ }
122
+ },
123
+ "required":[
124
+ "model",
125
+ "input",
126
+ "parameters"
127
+ ]
128
+ },
129
+ "WordArtGenerationResponse":{
130
+ "type":"object",
131
+ "properties":{
132
+ "output":{
133
+ "type":"string",
134
+ "description":"Generated word art image URL or data."
135
+ }
136
+ }
137
+ }
138
+ },
139
+ "securitySchemes":{
140
+ "BearerAuth":{
141
+ "type":"apiKey",
142
+ "in":"header",
143
+ "name":"Authorization"
144
+ }
145
+ }
146
+ }
147
+ }
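The spec above describes an asynchronous DashScope endpoint. Below is a hedged sketch of how a client might call it with plain `requests`, using only the fields present in the schema; the `output.task_id` field used for polling is an assumption about the DashScope task response, not something defined in this spec.

```python
# Sketch of a client call against the WordArt semantic spec above.
import os
import requests

BASE = 'https://dashscope.aliyuncs.com'
headers = {
    'Authorization': f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
    'Content-Type': 'application/json',
    'X-DashScope-Async': 'enable',   # async hint carried over from the spec
}
payload = {
    'model': 'wordart-semantic',
    'input': {'text': '文字创意', 'prompt': '水果,蔬菜,温暖的色彩空间'},
    'parameters': {'steps': 80, 'n': 2},
}
task = requests.post(f'{BASE}/api/v1/services/aigc/wordart/semantic',
                     headers=headers, json=payload).json()
task_id = (task.get('output') or {}).get('task_id')  # assumed response field
result = requests.get(f'{BASE}/api/v1/tasks/{task_id}', headers=headers).json()
print(result)
```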
openapi_example/aigc_wordart_texture.json ADDED
@@ -0,0 +1,154 @@
1
+ {
2
+ "openapi":"3.1.0",
3
+ "info":{
4
+ "title":"WordArt Texture Generation API",
5
+ "description":"API for generating textured word art with customizable parameters.",
6
+ "version":"v1.0.0"
7
+ },
8
+ "servers":[
9
+ {
10
+ "url":"https://dashscope.aliyuncs.com"
11
+ }
12
+ ],
13
+ "paths":{
14
+ "/api/v1/services/aigc/wordart/texture":{
15
+ "post":{
16
+ "summary":"Generate Textured WordArt",
17
+ "operationId":"generate_textured_WordArt",
18
+ "tags":[
19
+ "WordArt Generation"
20
+ ],
21
+ "requestBody":{
22
+ "required":true,
23
+ "X-DashScope-Async":"enable",
24
+ "content":{
25
+ "application/json":{
26
+ "schema":{
27
+ "$ref":"#/components/schemas/WordArtGenerationRequest"
28
+ }
29
+ }
30
+ }
31
+ },
32
+ "responses":{
33
+ "200":{
34
+ "description":"Successful Response",
35
+ "content":{
36
+ "application/json":{
37
+ "schema":{
38
+ "$ref":"#/components/schemas/WordArtGenerationResponse"
39
+ }
40
+ }
41
+ }
42
+ }
43
+ },
44
+ "security":[
45
+ {
46
+ "BearerAuth":[
47
+
48
+ ]
49
+ }
50
+ ]
51
+ }
52
+ },
53
+ "/api/v1/tasks/{task_id}":{
54
+ "get":{
55
+ "summary":"Get WordArt Result",
56
+ "operationId":"getwordartresult",
57
+ "tags":[
58
+ "Get Result"
59
+ ],
60
+ "parameters":[
61
+ {
62
+ "name":"task_id",
63
+ "in":"path",
64
+ "required":true,
65
+ "description":"The unique identifier of the word art generation task",
66
+ "schema":{
67
+ "type":"string"
68
+ }
69
+ }
70
+ ],
71
+ "security":[
72
+ {
73
+ "BearerAuth":[
74
+
75
+ ]
76
+ }
77
+ ]
78
+ }
79
+ }
80
+ },
81
+ "components":{
82
+ "schemas":{
83
+ "WordArtGenerationRequest":{
84
+ "type":"object",
85
+ "properties":{
86
+ "model":{
87
+ "type":"string",
88
+ "enum":[
89
+ "wordart-texture"
90
+ ]
91
+ },
92
+ "input":{
93
+ "type":"object",
94
+ "properties":{
95
+ "text":{
96
+ "type":"object",
97
+ "properties":{
98
+ "text_content":{
99
+ "type":"string",
100
+ "example":"文字纹理",
101
+ "description":"用户想要转为艺术字的文本",
102
+ "required":true
103
+ },
104
+ "font_name":{
105
+ "type":"string",
106
+ "example":"dongfangdakai",
107
+ "description":"用户想要转为艺术字的字体格式",
108
+ "required":true
109
+ }
110
+ }
111
+ },
112
+ "prompt":{
113
+ "type":"string",
114
+ "example":"水果,蔬菜,温暖的色彩空间",
115
+ "description":"用户对艺术字的风格要求,可能是形状、颜色、实体等方面的要求",
116
+ "required":true
117
+ }
118
+ }
119
+ },
120
+ "parameters":{
121
+ "type":"object",
122
+ "properties":{
123
+ "n":{
124
+ "type":"number",
125
+ "example":2
126
+ }
127
+ }
128
+ }
129
+ },
130
+ "required":[
131
+ "model",
132
+ "input",
133
+ "parameters"
134
+ ]
135
+ },
136
+ "WordArtGenerationResponse":{
137
+ "type":"object",
138
+ "properties":{
139
+ "output":{
140
+ "type":"string",
141
+ "description":"Generated word art image URL or data."
142
+ }
143
+ }
144
+ }
145
+ },
146
+ "securitySchemes":{
147
+ "ApiKeyAuth":{
148
+ "BearerAuth":{
149
+ "in":"header",
150
+ "name":"Authorization"
151
+ }
152
+ }
153
+ }
154
+ }
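The texture spec differs from the semantic one mainly in the request body, where `input.text` is an object carrying the text and a font name. A sketch of the corresponding payload, with values taken from the examples in the spec:

```python
# Same headers and task polling as the semantic example above; only the body
# and the path (/api/v1/services/aigc/wordart/texture) change.
texture_payload = {
    'model': 'wordart-texture',
    'input': {
        'text': {'text_content': '文字纹理', 'font_name': 'dongfangdakai'},
        'prompt': '水果,蔬菜,温暖的色彩空间',
    },
    'parameters': {'n': 2},
}
```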
publish_util.py ADDED
@@ -0,0 +1,271 @@
1
+ import glob
2
+ import os
3
+ import re
4
+ import shutil
5
+ import zipfile
6
+ from configparser import ConfigParser
7
+ from urllib.parse import unquote, urlparse
8
+
9
+ import json
10
+ import oss2
11
+ import requests
12
+ from version import __ms_version__ as MS_VERSION
13
+
14
+ from modelscope.utils.config import Config
15
+
16
+ main_version = MS_VERSION.split('rc')[0]
17
+ sub_version = main_version
18
+ if len(MS_VERSION.split('rc')) > 1:
19
+ sub_version += 'rc' + MS_VERSION.split('rc')[1]
20
+
21
+ DEFAULT_MS_PKG = f'https://modelscope-agent.oss-cn-hangzhou.aliyuncs.com/releases/v{main_version}/modelscope_agent-{sub_version}-py3-none-any.whl' # noqa E501
22
+
23
+
24
+ def upload_to_oss(bucket, local_file_path, oss_file_path):
25
+ # upload the file to Aliyun OSS
26
+ bucket.put_object_from_file(oss_file_path, local_file_path)
27
+
28
+ # set a public-read ACL on the uploaded object
29
+ bucket.put_object_acl(oss_file_path, oss2.OBJECT_ACL_PUBLIC_READ)
30
+
31
+ # build the public URL of the object
32
+ file_url = f"https://{bucket.bucket_name}.{bucket.endpoint.replace('http://', '')}/{oss_file_path}"
33
+ return file_url
34
+
35
+
36
+ def get_oss_config():
37
+ # try to read the OSS config from environment variables
38
+ access_key_id = os.getenv('OSS_ACCESS_KEY_ID')
39
+ access_key_secret = os.getenv('OSS_ACCESS_KEY_SECRET')
40
+ endpoint = os.getenv('OSS_ENDPOINT')
41
+ bucket_name = os.getenv('OSS_BUCKET_NAME')
42
+
43
+ # fall back to ~/.ossutilconfig when the environment variables are not set
44
+ if not access_key_id or not access_key_secret or not endpoint or not bucket_name:
45
+ config = ConfigParser()
46
+ config.read(os.path.expanduser('~/.ossutilconfig'))
47
+ if 'Credentials' in config:
48
+ access_key_id = config.get('Credentials', 'accessKeyId')
49
+ access_key_secret = config.get('Credentials', 'accessKeySecret')
50
+ endpoint = config.get('Credentials', 'endpoint')
51
+ bucket_name = config.get('Credentials', 'bucketName')
52
+
53
+ return access_key_id, access_key_secret, endpoint, bucket_name
54
+
55
+
56
+ def pop_user_info_from_config(src_dir, uuid_str):
57
+ """ Remove all personal information from the configuration files and return this data.
58
+ The purpose of this is to ensure that personal information is not stored in plain text
59
+ when releasing.
60
+
61
+ Args:
62
+ src_dir (str): config root path
63
+ uuid_str (str): user id
64
+ """
65
+ user_info = {}
66
+
67
+ # deal with plugin cfg
68
+ plugin_config_path = f'{src_dir}/config/{uuid_str}/openapi_plugin_config.json'
69
+ if os.path.exists(plugin_config_path):
70
+ with open(plugin_config_path, 'r') as f:
71
+ plugin_config = json.load(f)
72
+ if 'auth' in plugin_config:
73
+ if plugin_config['auth']['type'] == 'API Key':
74
+ user_info['apikey'] = plugin_config['auth'].pop('apikey')
75
+ user_info['apikey_type'] = plugin_config['auth'].pop(
76
+ 'apikey_type')
77
+ with open(plugin_config_path, 'w') as f:
78
+ json.dump(plugin_config, f, indent=2, ensure_ascii=False)
79
+
80
+ return user_info
81
+
82
+
83
+ def prepare_agent_zip(agent_name, src_dir, uuid_str, state):
84
+ # set up the Aliyun OSS credentials
85
+ local_file = os.path.abspath(os.path.dirname(__file__))
86
+ ak_id, ak_secret, endpoint, bucket_name = get_oss_config()
87
+ auth = oss2.Auth(ak_id, ak_secret)
88
+ bucket = oss2.Bucket(auth, endpoint, bucket_name)
89
+
90
+ new_directory = f'{src_dir}/upload/{uuid_str}' # path of the staging directory
91
+
92
+ # create the staging directory
93
+ if os.path.exists(new_directory):
94
+ shutil.rmtree(new_directory)
95
+ os.makedirs(new_directory)
96
+
97
+ # copy config/<uuid_str> into the staging directory as its config folder
98
+ uuid_str_path = f'{src_dir}/config/{uuid_str}' # per-user config directory
99
+ local_user_path = f'{new_directory}/config' # destination config directory
100
+ shutil.copytree(uuid_str_path, local_user_path, dirs_exist_ok=True)
101
+
102
+ target_conf = os.path.join(local_user_path, 'builder_config.json')
103
+ builder_cfg = Config.from_file(target_conf)
104
+ builder_cfg.knowledge = [
105
+ 'config/' + f.split('/')[-1] for f in builder_cfg.knowledge
106
+ ]
107
+ with open(target_conf, 'w') as f:
108
+ json.dump(builder_cfg.to_dict(), f, indent=2, ensure_ascii=False)
109
+
110
+ # copy the .json files under config/ into new_directory/config
111
+ config_path = f'{local_file}/config'
112
+ new_config_path = f'{new_directory}/config'
113
+
114
+ def find_json_and_images(directory):
115
+ # make sure the path ends with a separator
116
+ directory = os.path.join(directory, '')
117
+
118
+ # collect the JSON config files
119
+ json_files = [
120
+ os.path.join(directory, 'model_config.json'),
121
+ os.path.join(directory, 'tool_config.json'),
122
+ ]
123
+
124
+ # collect the image files
125
+ image_files = glob.glob(directory + '*.png') + \
126
+ glob.glob(directory + '*.jpg') + \
127
+ glob.glob(directory + '*.jpeg') + \
128
+ glob.glob(directory + '*.gif') # add more image formats here if needed
129
+
130
+ return json_files + image_files
131
+
132
+ for f in find_json_and_images(config_path):
133
+ shutil.copy(f, new_config_path)
134
+
135
+ # copy the assets directory into the staging directory
136
+ assets_path = f'{local_file}/assets'
137
+ new_assets_path = f'{new_directory}/assets'
138
+ shutil.copytree(assets_path, new_assets_path, dirs_exist_ok=True)
139
+
140
+ # prepend the modelscope-agent wheel to requirements.txt
141
+ requirements_file = f'{local_file}/requirements.txt'
142
+ new_requirements_file = f'{new_directory}/requirements.txt'
143
+ modelscope_agent_pkg = DEFAULT_MS_PKG.replace('version', MS_VERSION)
144
+ with open(requirements_file, 'r') as file:
145
+ content = file.readlines()
146
+ with open(new_requirements_file, 'w') as file:
147
+ file.write(modelscope_agent_pkg + '\n')
148
+ file.writelines(content)
149
+
150
+ # copy the .py files into the staging directory
151
+ for file in os.listdir(local_file):
152
+ if file.endswith('.py'):
153
+ shutil.copy(f'{local_file}/{file}', new_directory)
154
+
155
+ # zip up the staging directory
156
+ archive_path = shutil.make_archive(new_directory, 'zip', new_directory)
157
+
158
+ # upload the archive to OSS and set its ACL via the helper above
159
+ file_url = upload_to_oss(bucket, archive_path,
160
+ f'agents/user/{uuid_str}/{agent_name}.zip')
161
+
162
+ shutil.rmtree(new_directory)
163
+
164
+ # collect the environment variables the published agent must set
165
+ envs_required = {}
166
+ for t in builder_cfg.tools:
167
+ if t == 'amap_weather':
168
+ envs_required['AMAP_TOKEN'] = 'Your-AMAP-TOKEN'
169
+ return file_url, envs_required
170
+
171
+
172
+ def parse_version_from_file(file_path):
173
+ # regex that matches the __version__ line
174
+ version_pattern = r"^__version__\s*=\s*['\"]([^'\"]+)['\"]"
175
+
176
+ try:
177
+ with open(file_path, 'r') as file:
178
+ for line in file:
179
+ # check each line against the version pattern
180
+ match = re.match(version_pattern, line.strip())
181
+ if match:
182
+ # return the matched version string
183
+ return match.group(1)
184
+ return None # no version found in the file
185
+ except FileNotFoundError:
186
+ return None # file does not exist
187
+
188
+
189
+ def reload_agent_zip(agent_url, dst_dir, uuid_str, state):
190
+ # download zip from agent_url, and unzip to dst_dir/uuid_str
191
+ # parse the file name from the URL
192
+ parsed_url = urlparse(agent_url)
193
+ filename = os.path.basename(parsed_url.path)
194
+ zip_path = os.path.join(dst_dir, filename)
195
+
196
+ # extract agent_name (drop the '.zip' suffix)
197
+ agent_name, _ = os.path.splitext(filename)
198
+
199
+ # create a temporary extraction directory
200
+ temp_extract_dir = os.path.join(dst_dir, f'temp_{uuid_str}')
201
+ if os.path.exists(temp_extract_dir):
202
+ shutil.rmtree(temp_extract_dir)
203
+ os.makedirs(temp_extract_dir)
204
+
205
+ # download the ZIP file
206
+ response = requests.get(agent_url)
207
+ if response.status_code == 200:
208
+ with open(zip_path, 'wb') as file:
209
+ file.write(response.content)
210
+ else:
211
+ raise RuntimeError(
212
+ f'download file from {agent_url} error:\n {response.reason}')
213
+
214
+ # extract the ZIP into the temporary directory
215
+ with zipfile.ZipFile(zip_path, 'r') as zip_ref:
216
+ zip_ref.extractall(temp_extract_dir)
217
+
218
+ # parse the packaged version info
219
+ version = parse_version_from_file(
220
+ os.path.join(temp_extract_dir, 'version.py'))
221
+ print(f'agent fabric version: {version}')
222
+ # create the target config path
223
+ target_config_path = os.path.join(dst_dir, 'config', uuid_str)
224
+ if os.path.exists(target_config_path):
225
+ shutil.rmtree(target_config_path)
226
+ os.makedirs(target_config_path)
227
+
228
+ # copy the config directory
229
+ # old archives keep the config under config/local_user, newer ones put it directly under config/
230
+ if os.path.exists(os.path.join(temp_extract_dir, 'config', 'local_user')):
231
+ config_source_path = os.path.join(temp_extract_dir, 'config',
232
+ 'local_user')
233
+ elif os.path.exists(os.path.join(temp_extract_dir, 'config')):
234
+ config_source_path = os.path.join(temp_extract_dir, 'config')
235
+ else:
236
+ raise RuntimeError('未找到正确的配置文件信息')
237
+
238
+ if os.path.exists(config_source_path):
239
+ for item in os.listdir(config_source_path):
240
+ s = os.path.join(config_source_path, item)
241
+ d = os.path.join(target_config_path, item)
242
+ if os.path.isdir(s):
243
+ shutil.copytree(s, d, dirs_exist_ok=True)
244
+ else:
245
+ shutil.copy2(s, d)
246
+
247
+ # clean up: remove the temporary directory and the downloaded ZIP
248
+ shutil.rmtree(temp_extract_dir)
249
+ os.remove(zip_path)
250
+
251
+ # rewrite knowledge base paths from config/xxx to /tmp/agentfabric/config/$uuid/xxx
252
+ target_conf = os.path.join(target_config_path, 'builder_config.json')
253
+ builder_cfg = Config.from_file(target_conf)
254
+ builder_cfg.knowledge = [
255
+ f'{target_config_path}/' + f.split('/')[-1]
256
+ for f in builder_cfg.knowledge
257
+ ]
258
+ with open(target_conf, 'w') as f:
259
+ json.dump(builder_cfg.to_dict(), f, indent=2, ensure_ascii=False)
260
+
261
+ return agent_name
262
+
263
+
264
+ if __name__ == '__main__':
265
+ src_dir = os.path.abspath(os.path.dirname(__file__))
266
+ url, envs = prepare_agent_zip('test', src_dir, 'local_user', {})
267
+ print(url)
268
+
269
+ agent_name = reload_agent_zip(url, '/tmp/agentfabric_test', 'local_user',
270
+ {})
271
+ print(agent_name)
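A hypothetical end-to-end publish flow built on the helpers above, assuming it runs next to `publish_util.py`. The OSS credentials are read from these environment variables or from `~/.ossutilconfig` (see `get_oss_config`); every value below is a placeholder.

```python
import os

# Placeholders; real values come from your OSS account or ~/.ossutilconfig.
os.environ.setdefault('OSS_ACCESS_KEY_ID', 'your-access-key-id')
os.environ.setdefault('OSS_ACCESS_KEY_SECRET', 'your-access-key-secret')
os.environ.setdefault('OSS_ENDPOINT', 'oss-cn-hangzhou.aliyuncs.com')
os.environ.setdefault('OSS_BUCKET_NAME', 'your-bucket-name')

src_dir = os.path.abspath(os.path.dirname(__file__))
url, envs_required = prepare_agent_zip('my_agent', src_dir, 'local_user', {})
print(url)            # public link of the uploaded my_agent.zip
print(envs_required)  # e.g. {'AMAP_TOKEN': ...} if the amap_weather tool is enabled
```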
requirements.txt ADDED
@@ -0,0 +1,9 @@
1
+ dashscope
2
+ faiss-cpu
3
+ gradio==3.47.1
4
+ langchain
5
+ markdown-cjk-spacing
6
+ mdx_truly_sane_lists
7
+ pymdown-extensions
8
+ python-slugify
9
+ unstructured
response.json ADDED
@@ -0,0 +1 @@
1
+ {"status_code": 500, "request_id": "0e8e65da-ee20-9c49-920a-94ca1df6ec09", "code": "InternalError.Algo", "message": "InternalError.Algo", "output": null, "usage": null}
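`response.json` is a captured DashScope failure payload kept for reference. A caller might guard against such a response roughly like this; the field names follow the sample above.

```python
import json

with open('response.json') as f:
    rsp = json.load(f)

if rsp['status_code'] != 200:
    raise RuntimeError(
        f"DashScope call failed ({rsp['code']}): {rsp['message']} "
        f"[request_id={rsp['request_id']}]")
```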
user_core.py ADDED
@@ -0,0 +1,119 @@
1
+ import os
2
+ import ssl
3
+
4
+ import gradio as gr
5
+ import nltk
6
+ from config_utils import parse_configuration
7
+ from custom_prompt import CustomPromptGenerator
8
+ from custom_prompt_zh import ZhCustomPromptGenerator
9
+ from langchain.embeddings import ModelScopeEmbeddings
10
+ from langchain.vectorstores import FAISS
11
+ from modelscope_agent import prompt_generator_register
12
+ from modelscope_agent.agent import AgentExecutor
13
+ from modelscope_agent.agent_types import AgentType
14
+ from modelscope_agent.llm import LLMFactory
15
+ from modelscope_agent.retrieve import KnowledgeRetrieval
16
+ from modelscope_agent.tools.openapi_plugin import OpenAPIPluginTool
17
+ from modelscope_agent.utils.logger import agent_logger as logger
18
+
19
+ prompts = {
20
+ 'CustomPromptGenerator': CustomPromptGenerator,
21
+ 'ZhCustomPromptGenerator': ZhCustomPromptGenerator,
22
+ }
23
+ prompt_generator_register(prompts)
24
+
25
+ # try:
26
+ # _create_unverified_https_context = ssl._create_unverified_context
27
+ # except AttributeError:
28
+ # pass
29
+ # else:
30
+ # ssl._create_default_https_context = _create_unverified_https_context
31
+ #
32
+ # nltk.download('punkt')
33
+ # nltk.download('averaged_perceptron_tagger')
34
+
35
+
36
+ # init user chatbot_agent
37
+ def init_user_chatbot_agent(uuid_str=''):
38
+ builder_cfg, model_cfg, tool_cfg, available_tool_list, plugin_cfg, available_plugin_list = parse_configuration(
39
+ uuid_str)
40
+ # set top_p and stop_words for role play
41
+ model_cfg[builder_cfg.model]['generate_cfg']['top_p'] = 0.5
42
+ model_cfg[builder_cfg.model]['generate_cfg']['stop'] = 'Observation'
43
+
44
+ # build model
45
+ logger.info(
46
+ uuid=uuid_str,
47
+ message=f'using model {builder_cfg.model}',
48
+ content={'model_config': model_cfg[builder_cfg.model]})
49
+
50
+ # # check configuration
51
+ # if builder_cfg.model in ['qwen-max', 'qwen-72b-api', 'qwen-14b-api', 'qwen-plus']:
52
+ # if 'DASHSCOPE_API_KEY' not in os.environ:
53
+ # raise gr.Error('DASHSCOPE_API_KEY should be set via setting environment variable')
54
+
55
+ try:
56
+ llm = LLMFactory.build_llm(builder_cfg.model, model_cfg)
57
+ except Exception as e:
58
+ raise gr.Error(str(e))
59
+
60
+ # build prompt with zero shot react template
61
+ prompt_generator = builder_cfg.get('prompt_generator', None)
62
+ if builder_cfg.model.startswith('qwen') and not prompt_generator:
63
+ prompt_generator = 'CustomPromptGenerator'
64
+ language = builder_cfg.get('language', 'en')
65
+ if language == 'zh':
66
+ prompt_generator = 'ZhCustomPromptGenerator'
67
+
68
+ prompt_cfg = {
69
+ 'prompt_generator':
70
+ prompt_generator,
71
+ 'add_addition_round':
72
+ True,
73
+ 'knowledge_file_name':
74
+ os.path.basename(builder_cfg.knowledge[0]
75
+ if len(builder_cfg.knowledge) > 0 else ''),
76
+ 'uuid_str':
77
+ uuid_str
78
+ }
79
+
80
+ # get knowledge
81
+ # vector store configuration for the open-source setup
82
+ model_id = 'damo/nlp_gte_sentence-embedding_chinese-base'
83
+ embeddings = ModelScopeEmbeddings(model_id=model_id)
84
+ available_knowledge_list = []
85
+ for item in builder_cfg.knowledge:
86
+ # if isfile and end with .txt, .md, .pdf, support only those file
87
+ if os.path.isfile(item) and item.endswith(('.txt', '.md', '.pdf')):
88
+ available_knowledge_list.append(item)
89
+ if len(available_knowledge_list) > 0:
90
+ knowledge_retrieval = KnowledgeRetrieval.from_file(
91
+ available_knowledge_list, embeddings, FAISS)
92
+ else:
93
+ knowledge_retrieval = None
94
+
95
+ additional_tool_list = add_openapi_plugin_to_additional_tool(
96
+ plugin_cfg, available_plugin_list)
97
+ # build agent
98
+ agent = AgentExecutor(
99
+ llm,
100
+ additional_tool_list=additional_tool_list,
101
+ tool_cfg=tool_cfg,
102
+ agent_type=AgentType.MRKL,
103
+ knowledge_retrieval=knowledge_retrieval,
104
+ tool_retrieval=False,
105
+ **prompt_cfg)
106
+ agent.set_available_tools(available_tool_list + available_plugin_list)
107
+ return agent
108
+
109
+
110
+ def add_openapi_plugin_to_additional_tool(plugin_cfgs, available_plugin_list):
111
+ additional_tool_list = {}
112
+ for name, cfg in plugin_cfgs.items():
113
+ openapi_plugin_object = OpenAPIPluginTool(name=name, cfg=plugin_cfgs)
114
+ additional_tool_list[name] = openapi_plugin_object
115
+ return additional_tool_list
116
+
117
+
118
+ def user_chatbot_single_run(query, agent):
119
+ agent.run(query)
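A minimal sketch of driving the user agent defined above; it assumes a builder config exists for uuid `local_user` and that `DASHSCOPE_API_KEY` is set in the environment.

```python
# Build the customized user agent and run a single query against it.
agent = init_user_chatbot_agent(uuid_str='local_user')
user_chatbot_single_run('你好,介绍一下你自己', agent)
```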
version.py ADDED
@@ -0,0 +1,2 @@
1
+ __version__ = '0.1.7'
2
+ __ms_version__ = '0.2.4rc2'
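For reference, this is how `publish_util.py` turns `__ms_version__` into the modelscope-agent wheel URL; a worked example with the value above:

```python
ms_version = '0.2.4rc2'                    # __ms_version__ above
main_version = ms_version.split('rc')[0]   # '0.2.4'
sub_version = main_version
if len(ms_version.split('rc')) > 1:
    sub_version += 'rc' + ms_version.split('rc')[1]   # '0.2.4rc2'
url = ('https://modelscope-agent.oss-cn-hangzhou.aliyuncs.com/releases/'
       f'v{main_version}/modelscope_agent-{sub_version}-py3-none-any.whl')
print(url)
```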