qingxu98 committed
Commit 5c0a088
Parent: f5357f6
This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. README.md +18 -9
  2. app.py +46 -22
  3. check_proxy.py +1 -1
  4. config.py +42 -9
  5. core_functional.py +2 -0
  6. crazy_functional.py +230 -105
  7. crazy_functions/CodeInterpreter.py +231 -0
  8. crazy_functions/Latex输出PDF结果.py +4 -4
  9. crazy_functions/crazy_utils.py +9 -4
  10. crazy_functions/json_fns/pydantic_io.py +111 -0
  11. crazy_functions/live_audio/aliyunASR.py +3 -4
  12. crazy_functions/pdf_fns/parse_pdf.py +6 -1
  13. crazy_functions/vt_fns/vt_call_plugin.py +114 -0
  14. crazy_functions/vt_fns/vt_modify_config.py +81 -0
  15. crazy_functions/vt_fns/vt_state.py +28 -0
  16. crazy_functions/批量Markdown翻译.py +2 -0
  17. crazy_functions/批量翻译PDF文档_NOUGAT.py +271 -0
  18. crazy_functions/批量翻译PDF文档_多线程.py +4 -4
  19. crazy_functions/联网的ChatGPT.py +5 -1
  20. crazy_functions/联网的ChatGPT_bing版.py +5 -1
  21. crazy_functions/虚空终端.py +171 -111
  22. crazy_functions/语音助手.py +9 -5
  23. crazy_functions/谷歌检索小助手.py +93 -26
  24. docker-compose.yml +2 -2
  25. docs/Dockerfile+ChatGLM +1 -61
  26. docs/Dockerfile+JittorLLM +1 -59
  27. docs/Dockerfile+NoLocal+Latex +1 -27
  28. docs/GithubAction+AllCapacity +37 -0
  29. docs/GithubAction+ChatGLM+Moss +0 -1
  30. docs/GithubAction+NoLocal+Latex +5 -1
  31. docs/translate_english.json +288 -1
  32. docs/translate_std.json +6 -1
  33. multi_language.py +2 -0
  34. request_llm/bridge_all.py +16 -0
  35. request_llm/bridge_chatglmft.py +5 -5
  36. request_llm/bridge_chatgpt.py +7 -1
  37. request_llm/bridge_qianfan.py +3 -2
  38. request_llm/bridge_spark.py +15 -1
  39. request_llm/com_sparkapi.py +10 -3
  40. requirements.txt +1 -1
  41. tests/test_plugins.py +6 -1
  42. themes/common.css +21 -0
  43. themes/common.js +15 -9
  44. themes/contrast.css +482 -0
  45. themes/contrast.py +88 -0
  46. themes/default.css +49 -0
  47. themes/default.py +4 -2
  48. themes/green.css +5 -1
  49. themes/green.py +2 -0
  50. themes/theme.py +3 -0
README.md CHANGED
@@ -22,13 +22,13 @@ pinned: false
22
  **如果喜欢这个项目,请给它一个Star;如果您发明了好用的快捷键或函数插件,欢迎发pull requests!**
23
 
24
  If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
25
- To translate this project to arbitary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
26
 
27
  > **Note**
28
  >
29
- > 1.请注意只有 **高亮(如红色)** 标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
30
  >
31
- > 2.本项目中每个文件的功能都在自译解[`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。[安装方法](#installation)
32
  >
33
  > 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM和Moss等等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。
34
 
@@ -65,7 +65,8 @@ Latex论文一键校对 | [函数插件] 仿Grammarly对Latex文章进行语法
65
  [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)同时伺候的感觉一定会很不错吧?
66
  ⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
67
  更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
68
- ⭐[虚空终端](https://github.com/binary-husky/void-terminal)pip包 | 脱离GUI,在Python中直接调用本项目的函数插件(开发中)
 
69
  更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
70
  </div>
71
 
@@ -114,7 +115,7 @@ cd gpt_academic
114
 
115
  在`config.py`中,配置API KEY等设置,[点击查看特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1) 。
116
 
117
- (P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。P.S.项目同样支持通过`环境变量`配置大多数选项,环境变量的书写格式参考`docker-compose`文件。读取优先级: `环境变量` > `config_private.py` > `config.py`)
118
 
119
 
120
  3. 安装依赖
@@ -160,11 +161,14 @@ python main.py
160
 
161
  ### 安装方法II:使用Docker
162
 
 
 
163
  1. 仅ChatGPT(推荐大多数人选择,等价于docker-compose方案1)
164
  [![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
165
  [![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
166
  [![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
167
 
 
168
  ``` sh
169
  git clone --depth=1 https://github.com/binary-husky/gpt_academic.git # 下载项目
170
  cd gpt_academic # 进入路径
@@ -261,10 +265,13 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
261
  <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/9fdcc391-f823-464f-9322-f8719677043b" height="250" >
262
  </div>
263
 
264
- 3. 生成报告。大部分插件都会在执行结束后,生成工作报告
265
  <div align="center">
266
- <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="250" >
267
- <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="250" >
268
  </div>
269
 
270
  4. 模块化功能设计,简单的接口却能支持强大的功能
@@ -311,8 +318,10 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
311
  </div>
312
 
313
 
 
314
  ### II:版本:
315
- - version 3.5(Todo): 使用自然语言调用本项目的所有函数插件(高优先级)
 
316
  - version 3.49: 支持百度千帆平台和文心一言
317
  - version 3.48: 支持阿里达摩院通义千问,上海AI-Lab书生,讯飞星火
318
  - version 3.46: 支持完全脱手操作的实时语音对话
 
22
  **如果喜欢这个项目,请给它一个Star;如果您发明了好用的快捷键或函数插件,欢迎发pull requests!**
23
 
24
  If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
25
+ To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
26
 
27
  > **Note**
28
  >
29
+ > 1.请注意只有 **高亮** 标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
30
  >
31
+ > 2.本项目中每个文件的功能都在[自译解报告`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题[`wiki`](https://github.com/binary-husky/gpt_academic/wiki)。[安装方法](#installation) | [配置说明](https://github.com/binary-husky/gpt_academic/wiki/%E9%A1%B9%E7%9B%AE%E9%85%8D%E7%BD%AE%E8%AF%B4%E6%98%8E)。
32
  >
33
  > 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM和Moss等等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。
34
 
 
65
  [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)同时伺候的感觉一定会很不错吧?
66
  ⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
67
  更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
68
+ ⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
69
+ ⭐虚空终端插件 | [函数插件] 用自然语言,直接调度本项目其他插件
70
  更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
71
  </div>
72
 
 
115
 
116
  在`config.py`中,配置API KEY等设置,[点击查看特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1) 。
117
 
118
+ (P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中(仅复制您修改过的配置条目即可)。`config_private.py`不受git管控,可以让您的隐私信息更加安全。P.S.项目同样支持通过`环境变量`配置大多数选项,环境变量的书写格式参考`docker-compose`文件。读取优先级: `环境变量` > `config_private.py` > `config.py`)
119
 
120
 
121
  3. 安装依赖
 
161
 
162
  ### 安装方法II:使用Docker
163
 
164
+ [![fullcapacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
165
+
166
  1. 仅ChatGPT(推荐大多数人选择,等价于docker-compose方案1)
167
  [![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
168
  [![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
169
  [![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
170
 
171
+
172
  ``` sh
173
  git clone --depth=1 https://github.com/binary-husky/gpt_academic.git # 下载项目
174
  cd gpt_academic # 进入路径
 
265
  <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/9fdcc391-f823-464f-9322-f8719677043b" height="250" >
266
  </div>
267
 
268
+ 3. 虚空终端(从自然语言输入中,理解用户意图+自动调用其他插件)
269
+
270
+ - 步骤一:输入 “ 请调用插件翻译PDF论文,地址为https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf ”
271
+ - 步骤二:点击“虚空终端”
272
+
273
  <div align="center">
274
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/66f1b044-e9ff-4eed-9126-5d4f3668f1ed" width="500" >
 
275
  </div>
276
 
277
  4. 模块化功能设计,简单的接口却能支持强大的功能
 
318
  </div>
319
 
320
 
321
+
322
  ### II:版本:
323
+ - version 3.60(todo): 优化虚空终端,引入code interpreter和更多插件
324
+ - version 3.50: 使用自然语言调用本项目的所有函数插件(虚空终端),支持插件分类,改进UI,设计新主题
325
  - version 3.49: 支持百度千帆平台和文心一言
326
  - version 3.48: 支持阿里达摩院通义千问,上海AI-Lab书生,讯飞星火
327
  - version 3.46: 支持完全脱手操作的实时语音对话
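The configuration note in the README diff above states a read priority of `环境变量` > `config_private.py` > `config.py`. A minimal sketch of that lookup order — the function name `get_conf_value` and its module handling are illustrative, not the project's actual `toolbox.get_conf` implementation:

```python
import importlib
import os

def get_conf_value(name, default=None):
    """Illustrative config lookup honoring: env var > config_private.py > config.py."""
    # 1. Environment variables take top priority (format follows the docker-compose file).
    if name in os.environ:
        return os.environ[name]
    # 2. config_private.py (kept out of git) overrides config.py for same-named options.
    for module_name in ("config_private", "config"):
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            continue
        if hasattr(module, name):
            return getattr(module, name)
    return default
```

In the real project, `toolbox.get_conf` also handles type conversion and fetching several names at once; this sketch only shows the override order.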
app.py CHANGED
@@ -7,18 +7,18 @@ def main():
7
  from request_llm.bridge_all import predict
8
  from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
9
  # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到
10
- proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = \
11
- get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
12
  ENABLE_AUDIO, AUTO_CLEAR_TXT = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT')
 
13
  # 如果WEB_PORT是-1, 则随机选取WEB端口
14
  PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
15
- if not AUTHENTICATION: AUTHENTICATION = None
16
-
17
  from check_proxy import get_current_version
18
  from themes.theme import adjust_theme, advanced_css, theme_declaration
19
  initial_prompt = "Serve me as a writing and programming assistant."
20
  title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
21
- description = """代码开源和更新[地址🚀](https://github.com/binary-husky/chatgpt_academic),感谢热情的[开发者们❤️](https://github.com/binary-husky/chatgpt_academic/graphs/contributors)"""
 
22
 
23
  # 问询记录, python 版本建议3.9+(越新越好)
24
  import logging, uuid
@@ -35,7 +35,10 @@ def main():
35
 
36
  # 高级函数插件
37
  from crazy_functional import get_crazy_functions
38
- crazy_fns = get_crazy_functions()
 
 
 
39
 
40
  # 处理markdown文本格式的转变
41
  gr.Chatbot.postprocess = format_io
@@ -85,25 +88,33 @@ def main():
85
  if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
86
  variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
87
  functional[k]["Button"] = gr.Button(k, variant=variant)
 
88
  with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn:
89
  with gr.Row():
90
  gr.Markdown("插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)")
 
 
 
91
  with gr.Row():
92
- for k in crazy_fns:
93
- if not crazy_fns[k].get("AsButton", True): continue
94
- variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
95
- crazy_fns[k]["Button"] = gr.Button(k, variant=variant)
96
- crazy_fns[k]["Button"].style(size="sm")
97
  with gr.Row():
98
  with gr.Accordion("更多函数插件", open=True):
99
- dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)]
100
  with gr.Row():
101
  dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="", show_label=False).style(container=False)
102
  with gr.Row():
103
  plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
104
  placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
105
  with gr.Row():
106
- switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary")
107
  with gr.Row():
108
  with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用。", open=False) as area_file_up:
109
  file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple")
@@ -114,7 +125,6 @@ def main():
114
  max_length_sl = gr.Slider(minimum=256, maximum=8192, value=4096, step=1, interactive=True, label="Local LLM MaxLength",)
115
  checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
116
  md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
117
-
118
  gr.Markdown(description)
119
  with gr.Accordion("备选输入区", open=True, visible=False, elem_id="input-panel2") as area_input_secondary:
120
  with gr.Row():
@@ -125,6 +135,7 @@ def main():
125
  resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
126
  stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
127
  clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
 
128
  # 功能区显示开关与功能区的互动
129
  def fn_area_visibility(a):
130
  ret = {}
@@ -162,19 +173,19 @@ def main():
162
  click_handle = functional[k]["Button"].click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(k)], outputs=output_combo)
163
  cancel_handles.append(click_handle)
164
  # 文件上传区,接收文件后与chatbot的互动
165
- file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes], [chatbot, txt, txt2])
166
  # 函数插件-固定按钮区
167
- for k in crazy_fns:
168
- if not crazy_fns[k].get("AsButton", True): continue
169
- click_handle = crazy_fns[k]["Button"].click(ArgsGeneralWrapper(crazy_fns[k]["Function"]), [*input_combo, gr.State(PORT)], output_combo)
170
  click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
171
  cancel_handles.append(click_handle)
172
  # 函数插件-下拉菜单与随变按钮的互动
173
  def on_dropdown_changed(k):
174
- variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
175
  ret = {switchy_bt: gr.update(value=k, variant=variant)}
176
- if crazy_fns[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
177
- ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + crazy_fns[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
178
  else:
179
  ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
180
  return ret
@@ -185,13 +196,26 @@ def main():
185
  # 随变按钮的回调函数注册
186
  def route(request: gr.Request, k, *args, **kwargs):
187
  if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
188
- yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(request, *args, **kwargs)
189
  click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
190
  click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
191
  cancel_handles.append(click_handle)
192
  # 终止按钮的回调函数注册
193
  stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
194
  stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
195
  if ENABLE_AUDIO:
196
  from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
197
  rad = RealtimeAudioDistribution()
 
7
  from request_llm.bridge_all import predict
8
  from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
9
  # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到
10
+ proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
11
+ CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
12
  ENABLE_AUDIO, AUTO_CLEAR_TXT = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT')
13
+
14
  # 如果WEB_PORT是-1, 则随机选取WEB端口
15
  PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
 
 
16
  from check_proxy import get_current_version
17
  from themes.theme import adjust_theme, advanced_css, theme_declaration
18
  initial_prompt = "Serve me as a writing and programming assistant."
19
  title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
20
+ description = "代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic)"
21
+ description += "感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors)"
22
 
23
  # 问询记录, python 版本建议3.9+(越新越好)
24
  import logging, uuid
 
35
 
36
  # 高级函数插件
37
  from crazy_functional import get_crazy_functions
38
+ DEFAULT_FN_GROUPS, = get_conf('DEFAULT_FN_GROUPS')
39
+ plugins = get_crazy_functions()
40
+ all_plugin_groups = list(set([g for _, plugin in plugins.items() for g in plugin['Group'].split('|')]))
41
+ match_group = lambda tags, groups: any([g in groups for g in tags.split('|')])
42
 
43
  # 处理markdown文本格式的转变
44
  gr.Chatbot.postprocess = format_io
 
88
  if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
89
  variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
90
  functional[k]["Button"] = gr.Button(k, variant=variant)
91
+ functional[k]["Button"].style(size="sm")
92
  with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn:
93
  with gr.Row():
94
  gr.Markdown("插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)")
95
+ with gr.Row(elem_id="input-plugin-group"):
96
+ plugin_group_sel = gr.Dropdown(choices=all_plugin_groups, label='', show_label=False, value=DEFAULT_FN_GROUPS,
97
+ multiselect=True, interactive=True, elem_classes='normal_mut_select').style(container=False)
98
  with gr.Row():
99
+ for k, plugin in plugins.items():
100
+ if not plugin.get("AsButton", True): continue
101
+ visible = True if match_group(plugin['Group'], DEFAULT_FN_GROUPS) else False
102
+ variant = plugins[k]["Color"] if "Color" in plugin else "secondary"
103
+ plugin['Button'] = plugins[k]['Button'] = gr.Button(k, variant=variant, visible=visible).style(size="sm")
104
  with gr.Row():
105
  with gr.Accordion("更多函数插件", open=True):
106
+ dropdown_fn_list = []
107
+ for k, plugin in plugins.items():
108
+ if not match_group(plugin['Group'], DEFAULT_FN_GROUPS): continue
109
+ if not plugin.get("AsButton", True): dropdown_fn_list.append(k) # 排除已经是按钮的插件
110
+ elif plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示
111
  with gr.Row():
112
  dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="", show_label=False).style(container=False)
113
  with gr.Row():
114
  plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
115
  placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
116
  with gr.Row():
117
+ switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm")
118
  with gr.Row():
119
  with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用。", open=False) as area_file_up:
120
  file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple")
 
125
  max_length_sl = gr.Slider(minimum=256, maximum=8192, value=4096, step=1, interactive=True, label="Local LLM MaxLength",)
126
  checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
127
  md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
 
128
  gr.Markdown(description)
129
  with gr.Accordion("备选输入区", open=True, visible=False, elem_id="input-panel2") as area_input_secondary:
130
  with gr.Row():
 
135
  resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
136
  stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
137
  clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
138
+
139
  # 功能区显示开关与功能区的互动
140
  def fn_area_visibility(a):
141
  ret = {}
 
173
  click_handle = functional[k]["Button"].click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(k)], outputs=output_combo)
174
  cancel_handles.append(click_handle)
175
  # 文件上传区,接收文件后与chatbot的互动
176
+ file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
177
  # 函数插件-固定按钮区
178
+ for k in plugins:
179
+ if not plugins[k].get("AsButton", True): continue
180
+ click_handle = plugins[k]["Button"].click(ArgsGeneralWrapper(plugins[k]["Function"]), [*input_combo, gr.State(PORT)], output_combo)
181
  click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
182
  cancel_handles.append(click_handle)
183
  # 函数插件-下拉菜单与随变按钮的互动
184
  def on_dropdown_changed(k):
185
+ variant = plugins[k]["Color"] if "Color" in plugins[k] else "secondary"
186
  ret = {switchy_bt: gr.update(value=k, variant=variant)}
187
+ if plugins[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
188
+ ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + plugins[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
189
  else:
190
  ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
191
  return ret
 
196
  # 随变按钮的回调函数注册
197
  def route(request: gr.Request, k, *args, **kwargs):
198
  if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
199
+ yield from ArgsGeneralWrapper(plugins[k]["Function"])(request, *args, **kwargs)
200
  click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
201
  click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
202
  cancel_handles.append(click_handle)
203
  # 终止按钮的回调函数注册
204
  stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
205
  stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
206
+ plugins_as_btn = {name:plugin for name, plugin in plugins.items() if plugin.get('Button', None)}
207
+ def on_group_change(group_list):
208
+ btn_list = []
209
+ fns_list = []
210
+ if not group_list: # 处理特殊情况:没有选择任何插件组
211
+ return [*[plugin['Button'].update(visible=False) for _, plugin in plugins_as_btn.items()], gr.Dropdown.update(choices=[])]
212
+ for k, plugin in plugins.items():
213
+ if plugin.get("AsButton", True):
214
+ btn_list.append(plugin['Button'].update(visible=match_group(plugin['Group'], group_list))) # 刷新按钮
215
+ if plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示
216
+ elif match_group(plugin['Group'], group_list): fns_list.append(k) # 刷新下拉列表
217
+ return [*btn_list, gr.Dropdown.update(choices=fns_list)]
218
+ plugin_group_sel.select(fn=on_group_change, inputs=[plugin_group_sel], outputs=[*[plugin['Button'] for name, plugin in plugins_as_btn.items()], dropdown])
219
  if ENABLE_AUDIO:
220
  from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
221
  rad = RealtimeAudioDistribution()
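The plugin-grouping change in the app.py diff above revolves around the `match_group` lambda: a plugin's `Group` field is a `|`-separated tag list, and the plugin stays visible if any tag is among the currently selected groups. A small self-contained reproduction — the plugin entries are hypothetical examples, not the real registry:

```python
# match_group as defined in the diff: show a plugin if any of its '|'-separated
# Group tags is among the currently selected groups.
match_group = lambda tags, groups: any([g in groups for g in tags.split('|')])

# Hypothetical plugin registry, in the shape get_crazy_functions() returns.
plugins = {
    "解析整个Python项目": {"Group": "编程", "AsButton": True},
    "批量总结Word文档": {"Group": "学术", "AsButton": True},
    "清除所有缓存文件": {"Group": "对话|编程", "AsButton": False},
}

DEFAULT_FN_GROUPS = ['对话', '编程', '学术']
visible = [name for name, p in plugins.items() if match_group(p['Group'], DEFAULT_FN_GROUPS)]
```

This is the same predicate `on_group_change` uses to toggle button visibility and rebuild the dropdown list when the group selector changes.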
check_proxy.py CHANGED
@@ -5,7 +5,7 @@ def check_proxy(proxies):
5
  try:
6
  response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
7
  data = response.json()
8
- print(f'查询代理的地理位置,返回的结果是{data}')
9
  if 'country_name' in data:
10
  country = data['country_name']
11
  result = f"代理配置 {proxies_https}, 代理所在地:{country}"
 
5
  try:
6
  response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
7
  data = response.json()
8
+ # print(f'查询代理的地理位置,返回的结果是{data}')
9
  if 'country_name' in data:
10
  country = data['country_name']
11
  result = f"代理配置 {proxies_https}, 代理所在地:{country}"
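`check_proxy()` above receives a requests-style `proxies` dict. For reference, a sketch of how such a mapping is typically assembled from a single proxy URL — `build_proxies` is an illustrative helper, not part of this repository:

```python
def build_proxies(proxy_url):
    """Assemble the requests-style proxies mapping that check_proxy(proxies) expects."""
    if not proxy_url:
        return None  # no proxy configured; requests will connect directly
    return {"http": proxy_url, "https": proxy_url}
```

With the returned dict, `requests.get(..., proxies=proxies, timeout=4)` routes both HTTP and HTTPS traffic through the proxy, matching the call shown in the diff.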
config.py CHANGED
@@ -47,7 +47,11 @@ API_URL_REDIRECT = {}
47
  DEFAULT_WORKER_NUM = 3
48
 
49
 
50
51
  CHATBOT_HEIGHT = 1115
52
 
53
 
@@ -75,8 +79,26 @@ MAX_RETRY = 2
75
  LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm"
76
  AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo", "spark", "azure-gpt-3.5"]
77
 
78
- # ChatGLM(2) Finetune Model Path (如果使用ChatGLM2微调模型,需要把"chatglmft"加入AVAIL_LLM_MODELS中)
79
- ChatGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
80
 
81
 
82
  # 本地LLM模型如ChatGLM的执行方式 CPU/GPU
@@ -92,10 +114,6 @@ CONCURRENT_COUNT = 100
92
  AUTO_CLEAR_TXT = False
93
 
94
 
95
- # 色彩主体,可选 ["Default", "Chuanhu-Small-and-Beautiful"]
96
- THEME = "Default"
97
-
98
-
99
  # 加一个live2d装饰
100
  ADD_WAIFU = False
101
 
@@ -161,10 +179,13 @@ HUGGINGFACE_ACCESS_TOKEN = "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"
161
  # 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
162
  GROBID_URLS = [
163
  "https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
164
- "https://shaocongma-grobid.hf.space","https://FBR123-grobid.hf.space",
165
  ]
166
 
167
 
 
 
 
168
 
169
  """
170
  在线大模型配置关联关系示意图
@@ -182,7 +203,7 @@ GROBID_URLS = [
182
  │ ├── AZURE_ENGINE
183
  │ └── API_URL_REDIRECT
184
 
185
- ├── "spark" 星火认知大模型
186
  │ ├── XFYUN_APPID
187
  │ ├── XFYUN_API_SECRET
188
  │ └── XFYUN_API_KEY
@@ -203,6 +224,18 @@ GROBID_URLS = [
203
  ├── NEWBING_STYLE
204
  └── NEWBING_COOKIES
205
206
 
207
 
208
  插件在线服务配置依赖关系示意图
 
47
  DEFAULT_WORKER_NUM = 3
48
 
49
 
50
+ # 色彩主题,可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
51
+ THEME = "Default"
52
+
53
+
54
+ # 对话窗的高度 (仅在LAYOUT="TOP-DOWN"时生效)
55
  CHATBOT_HEIGHT = 1115
56
 
57
 
 
79
  LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm"
80
  AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo", "spark", "azure-gpt-3.5"]
81
 
82
+ # 插件分类默认选项
83
+ DEFAULT_FN_GROUPS = ['对话', '编程', '学术']
84
+
85
+
86
+ # 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
87
+ LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
88
+ AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5", "api2d-gpt-3.5-turbo",
89
+ "gpt-4", "api2d-gpt-4", "chatglm", "moss", "newbing", "stack-claude"]
90
+ # P.S. 其他可用的模型还包括 ["qianfan", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613",
91
+ # "spark", "sparkv2", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]
92
+
93
+
94
+ # 百度千帆(LLM_MODEL="qianfan")
95
+ BAIDU_CLOUD_API_KEY = ''
96
+ BAIDU_CLOUD_SECRET_KEY = ''
97
+ BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot' # 可选 "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat"
98
+
99
+
100
+ # 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
101
+ CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
102
 
103
 
104
  # 本地LLM模型如ChatGLM的执行方式 CPU/GPU
 
114
  AUTO_CLEAR_TXT = False
115
 
116
 
117
  # 加一个live2d装饰
118
  ADD_WAIFU = False
119
 
 
179
  # 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
180
  GROBID_URLS = [
181
  "https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
182
+ "https://shaocongma-grobid.hf.space","https://FBR123-grobid.hf.space", "https://yeku-grobid.hf.space",
183
  ]
184
 
185
 
186
+ # 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
187
+ ALLOW_RESET_CONFIG = False
188
+
189
 
190
  """
191
  在线大模型配置关联关系示意图
 
203
  │ ├── AZURE_ENGINE
204
  │ └── API_URL_REDIRECT
205
 
206
+ ├── "spark" 星火认知大模型 spark & sparkv2
207
  │ ├── XFYUN_APPID
208
  │ ├── XFYUN_API_SECRET
209
  │ └── XFYUN_API_KEY
 
224
  ├── NEWBING_STYLE
225
  └── NEWBING_COOKIES
226
 
227
+
228
+ 用户图形界面布局依赖关系示意图
229
+
230
+ ├── CHATBOT_HEIGHT 对话窗的高度
231
+ ├── CODE_HIGHLIGHT 代码高亮
232
+ ├── LAYOUT 窗口布局
233
+ ├── DARK_MODE 暗色模式 / 亮色模式
234
+ ├── DEFAULT_FN_GROUPS 插件分类默认选项
235
+ ├── THEME 色彩主题
236
+ ├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
237
+ ├── ADD_WAIFU 加一个live2d装饰
238
+ ├── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性
239
 
240
 
241
  插件在线服务配置依赖关系示意图
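The config.py diff above stresses that `LLM_MODEL` *must* be contained in `AVAIL_LLM_MODELS`. That invariant, written out as an explicit check — the `validate_model_choice` function is illustrative; the project itself relies on the comment:

```python
def validate_model_choice(llm_model, avail_llm_models):
    """Enforce the config invariant: the default model must be in the available list."""
    if llm_model not in avail_llm_models:
        raise ValueError(f"LLM_MODEL={llm_model!r} is not listed in AVAIL_LLM_MODELS")
    return llm_model

# Values taken from the diff above.
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5", "api2d-gpt-3.5-turbo",
                    "gpt-4", "api2d-gpt-4", "chatglm", "moss", "newbing", "stack-claude"]
LLM_MODEL = validate_model_choice("gpt-3.5-turbo", AVAIL_LLM_MODELS)
```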
core_functional.py CHANGED
@@ -63,6 +63,7 @@ def get_core_functions():
63
  "英译中": {
64
  "Prefix": r"翻译成地道的中文:" + "\n\n",
65
  "Suffix": r"",
 
66
  },
67
  "找图片": {
68
  "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
@@ -78,6 +79,7 @@ def get_core_functions():
78
  "Prefix": r"Here are some bibliography items, please transform them into bibtex style." +
79
  r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
80
  r"Items need to be transformed:",
 
81
  "Suffix": r"",
82
  }
83
  }
 
63
  "英译中": {
64
  "Prefix": r"翻译成地道的中文:" + "\n\n",
65
  "Suffix": r"",
66
+ "Visible": False,
67
  },
68
  "找图片": {
69
  "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
 
79
  "Prefix": r"Here are some bibliography items, please transform them into bibtex style." +
80
  r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
81
  r"Items need to be transformed:",
82
+ "Visible": False,
83
  "Suffix": r"",
84
  }
85
  }
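The core_functional.py entries above are dicts of `Prefix`/`Suffix` strings, plus the newly added `Visible` flag. How such an entry wraps the user's input can be sketched like this — `apply_core_function` is an illustrative helper, not the project's actual code path:

```python
def apply_core_function(entry, user_input):
    """Wrap the user's input with the entry's Prefix and Suffix, as core functions do."""
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")

# An entry in the shape shown in the diff ("英译中" with its new Visible flag).
entry = {"Prefix": "翻译成地道的中文:\n\n", "Suffix": "", "Visible": False}
prompt = apply_core_function(entry, "Translate me")
```

`Visible: False` only hides the button in the UI; the prefix/suffix wrapping itself is unchanged by this commit.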
crazy_functional.py CHANGED
@@ -2,7 +2,6 @@ from toolbox import HotReload # HotReload 的意思是热更新,修改函数
2
 
3
 
4
  def get_crazy_functions():
5
- ###################### 第一组插件 ###########################
6
  from crazy_functions.读文章写摘要 import 读文章写摘要
7
  from crazy_functions.生成函数注释 import 批量生成函数注释
8
  from crazy_functions.解析项目源代码 import 解析项目本身
@@ -25,204 +24,258 @@ def get_crazy_functions():
25
  from crazy_functions.对话历史存档 import 载入对话历史存档
26
  from crazy_functions.对话历史存档 import 删除所有本地对话历史记录
27
  from crazy_functions.辅助功能 import 清除缓存
28
-
29
  from crazy_functions.批量Markdown翻译 import Markdown英译中
 
30
  function_plugins = {
 
31
  "解析整个Python项目": {
32
- "Color": "stop", # 按钮颜色
 
 
 
33
  "Function": HotReload(解析一个Python项目)
34
  },
35
  "载入对话历史存档(先上传存档或输入路径)": {
 
36
  "Color": "stop",
37
- "AsButton":False,
 
38
  "Function": HotReload(载入对话历史存档)
39
  },
40
- "删除所有本地对话历史记录(请谨慎操作)": {
41
- "AsButton":False,
 
 
42
  "Function": HotReload(删除所有本地对话历史记录)
43
  },
44
- "清除所有缓存文件(请谨慎操作)": {
 
45
  "Color": "stop",
46
  "AsButton": False, # 加入下拉菜单中
 
47
  "Function": HotReload(清除缓存)
48
  },
49
- "解析Jupyter Notebook文件": {
50
- "Color": "stop",
51
- "AsButton":False,
52
- "Function": HotReload(解析ipynb文件),
53
- "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
54
- "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示
55
- },
56
  "批量总结Word文档": {
 
57
  "Color": "stop",
 
 
58
  "Function": HotReload(总结word文档)
59
  },
60
  "解析整个C++项目头文件": {
61
- "Color": "stop", # 按钮颜色
 
62
  "AsButton": False, # 加入下拉菜单中
 
63
  "Function": HotReload(解析一个C项目的头文件)
64
  },
65
  "解析整个C++项目(.cpp/.hpp/.c/.h)": {
66
- "Color": "stop", # 按钮颜色
 
67
  "AsButton": False, # 加入下拉菜单中
 
68
  "Function": HotReload(解析一个C项目)
69
  },
70
  "解析整个Go项目": {
71
- "Color": "stop", # 按钮颜色
 
72
  "AsButton": False, # 加入下拉菜单中
 
73
  "Function": HotReload(解析一个Golang项目)
74
  },
75
  "解析整个Rust项目": {
76
- "Color": "stop", # 按钮颜色
 
77
  "AsButton": False, # 加入下拉菜单中
 
78
  "Function": HotReload(解析一个Rust项目)
79
  },
80
  "解析整个Java项目": {
81
- "Color": "stop", # 按钮颜色
 
82
  "AsButton": False, # 加入下拉菜单中
 
83
  "Function": HotReload(解析一个Java项目)
84
  },
85
  "解析整个前端项目(js,ts,css等)": {
86
- "Color": "stop", # 按钮颜色
 
87
  "AsButton": False, # 加入下拉菜单中
 
88
  "Function": HotReload(解析一个前端项目)
89
  },
90
  "解析整个Lua项目": {
91
- "Color": "stop", # 按钮颜色
 
92
  "AsButton": False, # 加入下拉菜单中
 
93
  "Function": HotReload(解析一个Lua项目)
94
  },
95
  "解析整个CSharp项目": {
96
- "Color": "stop", # 按钮颜色
 
97
  "AsButton": False, # 加入下拉菜单中
 
98
  "Function": HotReload(解析一个CSharp项目)
99
  },
 
 
 
 
 
 
 
 
 
100
  "读Tex论文写摘要": {
101
- "Color": "stop", # 按钮颜色
 
 
 
102
  "Function": HotReload(读文章写摘要)
103
  },
104
- "Markdown/Readme英译中": {
105
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
106
  "Color": "stop",
 
 
 
 
 
 
 
 
 
107
  "Function": HotReload(Markdown英译中)
108
  },
109
  "批量生成函数注释": {
110
- "Color": "stop", # 按钮颜色
 
111
  "AsButton": False, # 加入下拉菜单中
 
112
  "Function": HotReload(批量生成函数注释)
113
  },
114
  "保存当前的对话": {
 
 
 
115
  "Function": HotReload(对话历史存档)
116
  },
117
- "[多线程Demo] 解析此项目本身(源码自译解)": {
 
118
  "AsButton": False, # 加入下拉菜单中
 
119
  "Function": HotReload(解析项目本身)
120
  },
121
- # "[老旧的Demo] 把本项目源代码切换成全英文": {
122
- # # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
123
- # "AsButton": False, # 加入下拉菜单中
124
- # "Function": HotReload(全项目切换英文)
125
- # },
126
- "[插件demo] 历史上的今天": {
127
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
128
  "Function": HotReload(高阶功能模板函数)
129
  },
130
-
131
- }
132
- ###################### 第二组插件 ###########################
133
- # [第二组插件]: 经过充分测试
134
- from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
135
- # from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer
136
- from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
137
- from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
138
- from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
139
- from crazy_functions.Latex全文润色 import Latex中文润色
140
- from crazy_functions.Latex全文润色 import Latex英文纠错
141
- from crazy_functions.Latex全文翻译 import Latex中译英
142
- from crazy_functions.Latex全文翻译 import Latex英译中
143
- from crazy_functions.批量Markdown翻译 import Markdown中译英
144
-
145
- function_plugins.update({
146
- "批量翻译PDF文档(多线程)": {
147
  "Color": "stop",
148
- "AsButton": True, # 加入下拉菜单中
 
149
  "Function": HotReload(批量翻译PDF文档)
150
  },
151
  "询问多个GPT模型": {
152
- "Color": "stop", # 按钮颜色
 
 
153
  "Function": HotReload(同时问询)
154
  },
155
- "[测试功能] 批量总结PDF文档": {
 
156
  "Color": "stop",
157
  "AsButton": False, # 加入下拉菜单中
158
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
159
  "Function": HotReload(批量总结PDF文档)
160
  },
161
- # "[测试功能] 批量总结PDF文档pdfminer": {
162
- # "Color": "stop",
163
- # "AsButton": False, # 加入下拉菜单中
164
- # "Function": HotReload(批量总结PDF文档pdfminer)
165
- # },
166
  "谷歌学术检索助手(输入谷歌学术搜索页url)": {
 
167
  "Color": "stop",
168
  "AsButton": False, # 加入下拉菜单中
 
169
  "Function": HotReload(谷歌检索小助手)
170
  },
171
  "理解PDF文档内容 (模仿ChatPDF)": {
172
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
173
  "Color": "stop",
174
  "AsButton": False, # 加入下拉菜单中
 
175
  "Function": HotReload(理解PDF文档内容标准文件输入)
176
  },
177
  "英文Latex项目全文润色(输入路径或上传压缩包)": {
178
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
179
  "Color": "stop",
180
  "AsButton": False, # 加入下拉菜单中
 
181
  "Function": HotReload(Latex英文润色)
182
  },
183
  "英文Latex项目全文纠错(输入路径或上传压缩包)": {
184
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
185
  "Color": "stop",
186
  "AsButton": False, # 加入下拉菜单中
 
187
  "Function": HotReload(Latex英文纠错)
188
  },
189
  "中文Latex项目全文润色(输入路径或上传压缩包)": {
190
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
191
  "Color": "stop",
192
  "AsButton": False, # 加入下拉菜单中
 
193
  "Function": HotReload(Latex中文润色)
194
  },
195
  "Latex项目全文中译英(输入路径或上传压缩包)": {
196
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
197
  "Color": "stop",
198
  "AsButton": False, # 加入下拉菜单中
 
199
  "Function": HotReload(Latex中译英)
200
  },
201
 "Latex项目全文英译中(输入路径或上传压缩包)": {
202
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
203
  "Color": "stop",
204
  "AsButton": False, # 加入下拉菜单中
 
205
  "Function": HotReload(Latex英译中)
206
  },
207
  "批量Markdown中译英(输入路径或上传压缩包)": {
208
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
209
  "Color": "stop",
210
  "AsButton": False, # 加入下拉菜单中
 
211
  "Function": HotReload(Markdown中译英)
212
  },
 
213
 
214
-
215
- })
216
-
217
- ###################### 第三组插件 ###########################
218
- # [第三组插件]: 尚未充分测试的函数插件
219
-
220
  try:
221
  from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
222
  function_plugins.update({
223
  "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
 
224
  "Color": "stop",
225
  "AsButton": False, # 加入下拉菜单中
 
226
  "Function": HotReload(下载arxiv论文并翻译摘要)
227
  }
228
  })
@@ -233,16 +286,20 @@ def get_crazy_functions():
233
  from crazy_functions.联网的ChatGPT import 连接网络回答问题
234
  function_plugins.update({
235
  "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
 
236
  "Color": "stop",
237
  "AsButton": False, # 加入下拉菜单中
 
238
  "Function": HotReload(连接网络回答问题)
239
  }
240
  })
241
  from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
242
  function_plugins.update({
243
  "连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
 
244
  "Color": "stop",
245
  "AsButton": False, # 加入下拉菜单中
 
246
  "Function": HotReload(连接bing搜索回答问题)
247
  }
248
  })
@@ -253,10 +310,11 @@ def get_crazy_functions():
253
  from crazy_functions.解析项目源代码 import 解析任意code项目
254
  function_plugins.update({
255
  "解析项目源代码(手动指定和筛选源代码文件类型)": {
 
256
  "Color": "stop",
257
  "AsButton": False,
258
- "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
259
- "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示
260
  "Function": HotReload(解析任意code项目)
261
  },
262
  })
@@ -267,10 +325,11 @@ def get_crazy_functions():
267
  from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
268
  function_plugins.update({
269
  "询问多个GPT模型(手动指定询问哪些模型)": {
 
270
  "Color": "stop",
271
  "AsButton": False,
272
- "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
273
- "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
274
  "Function": HotReload(同时问询_指定模型)
275
  },
276
  })
@@ -281,10 +340,12 @@ def get_crazy_functions():
281
  from crazy_functions.图片生成 import 图片生成
282
  function_plugins.update({
283
  "图片生成(先切换模型到openai或api2d)": {
 
284
  "Color": "stop",
285
  "AsButton": False,
286
- "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
287
- "ArgsReminder": "在这里输入分辨率, 如256x256(默认)", # 高级参数输入区的显示提示
 
288
  "Function": HotReload(图片生成)
289
  },
290
  })
@@ -295,10 +356,12 @@ def get_crazy_functions():
295
  from crazy_functions.总结音视频 import 总结音视频
296
  function_plugins.update({
297
  "批量总结音视频(输入路径或上传压缩包)": {
 
298
  "Color": "stop",
299
  "AsButton": False,
300
  "AdvancedArgs": True,
301
  "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。",
 
302
  "Function": HotReload(总结音视频)
303
  }
304
  })
@@ -309,8 +372,10 @@ def get_crazy_functions():
309
  from crazy_functions.数学动画生成manim import 动画生成
310
  function_plugins.update({
311
  "数学动画生成(Manim)": {
 
312
  "Color": "stop",
313
  "AsButton": False,
 
314
  "Function": HotReload(动画生成)
315
  }
316
  })
@@ -321,6 +386,7 @@ def get_crazy_functions():
321
  from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
322
  function_plugins.update({
323
  "Markdown翻译(手动指定语言)": {
 
324
  "Color": "stop",
325
  "AsButton": False,
326
  "AdvancedArgs": True,
@@ -335,6 +401,7 @@ def get_crazy_functions():
335
  from crazy_functions.Langchain知识库 import 知识库问答
336
  function_plugins.update({
337
  "构建知识库(请先上传文件素材)": {
 
338
  "Color": "stop",
339
  "AsButton": False,
340
  "AdvancedArgs": True,
@@ -349,6 +416,7 @@ def get_crazy_functions():
349
  from crazy_functions.Langchain知识库 import 读取知识库作答
350
  function_plugins.update({
351
  "知识库问答": {
 
352
  "Color": "stop",
353
  "AsButton": False,
354
  "AdvancedArgs": True,
@@ -358,11 +426,12 @@ def get_crazy_functions():
358
  })
359
  except:
360
  print('Load function plugin failed')
361
-
362
  try:
363
  from crazy_functions.交互功能函数模板 import 交互功能模板函数
364
  function_plugins.update({
365
  "交互功能模板函数": {
 
366
  "Color": "stop",
367
  "AsButton": False,
368
  "Function": HotReload(交互功能模板函数)
@@ -371,24 +440,11 @@ def get_crazy_functions():
371
  except:
372
  print('Load function plugin failed')
373
 
374
- # try:
375
- # from crazy_functions.chatglm微调工具 import 微调数据集生成
376
- # function_plugins.update({
377
- # "黑盒模型学习: 微调数据集生成 (先上传数据集)": {
378
- # "Color": "stop",
379
- # "AsButton": False,
380
- # "AdvancedArgs": True,
381
- # "ArgsReminder": "针对数据集输入(如 绿帽子*深蓝色衬衫*黑色运动裤)给出指令,例如您可以将以下命令复制到下方: --llm_to_learn=azure-gpt-3.5 --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、过去经历进行描写。要求:100字以内,用第二人称。' --system_prompt=''",
382
- # "Function": HotReload(微调数据集生成)
383
- # }
384
- # })
385
- # except:
386
- # print('Load function plugin failed')
387
-
388
  try:
389
  from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
390
  function_plugins.update({
391
  "Latex英文纠错+高亮修正位置 [需Latex]": {
 
392
  "Color": "stop",
393
  "AsButton": False,
394
  "AdvancedArgs": True,
@@ -399,41 +455,110 @@ def get_crazy_functions():
399
  from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
400
  function_plugins.update({
401
  "Arixv论文精细翻译(输入arxivID)[需Latex]": {
 
402
  "Color": "stop",
403
  "AsButton": False,
404
  "AdvancedArgs": True,
405
- "ArgsReminder":
406
- "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "+
407
- "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
 
 
408
  "Function": HotReload(Latex翻译中文并重新编译PDF)
409
  }
410
  })
411
  function_plugins.update({
412
  "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
 
413
  "Color": "stop",
414
  "AsButton": False,
415
  "AdvancedArgs": True,
416
- "ArgsReminder":
417
- "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "+
418
- "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
 
 
419
  "Function": HotReload(Latex翻译中文并重新编译PDF)
 }
 })
 except:
 print('Load function plugin failed')
  # try:
426
- # from crazy_functions.虚空终端 import 终端
427
  # function_plugins.update({
428
- # "超级终端": {
429
  # "Color": "stop",
430
  # "AsButton": False,
431
- # # "AdvancedArgs": True,
432
- # # "ArgsReminder": "",
433
- # "Function": HotReload(终端)
434
  # }
435
  # })
436
  # except:
437
  # print('Load function plugin failed')
438
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
439
  return function_plugins
 
 def get_crazy_functions():
 from crazy_functions.读文章写摘要 import 读文章写摘要
 from crazy_functions.生成函数注释 import 批量生成函数注释
 from crazy_functions.解析项目源代码 import 解析项目本身
 from crazy_functions.对话历史存档 import 载入对话历史存档
 from crazy_functions.对话历史存档 import 删除所有本地对话历史记录
 from crazy_functions.辅助功能 import 清除缓存
 from crazy_functions.批量Markdown翻译 import Markdown英译中
+ from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
+ from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
+ from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
+ from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
+ from crazy_functions.Latex全文润色 import Latex中文润色
+ from crazy_functions.Latex全文润色 import Latex英文纠错
+ from crazy_functions.Latex全文翻译 import Latex中译英
+ from crazy_functions.Latex全文翻译 import Latex英译中
+ from crazy_functions.批量Markdown翻译 import Markdown中译英
+ from crazy_functions.虚空终端 import 虚空终端
+
+
 function_plugins = {
+ "虚空终端": {
+     "Group": "对话|编程|学术",
+     "Color": "stop",
+     "AsButton": True,
+     "Function": HotReload(虚空终端)
+ },
  "解析整个Python项目": {
48
+ "Group": "编程",
49
+ "Color": "stop",
50
+ "AsButton": True,
51
+ "Info": "解析一个Python项目的所有源文件(.py) | 输入参数为路径",
52
  "Function": HotReload(解析一个Python项目)
53
  },
54
  "载入对话历史存档(先上传存档或输入路径)": {
55
+ "Group": "对话",
56
  "Color": "stop",
57
+ "AsButton": False,
58
+ "Info": "载入对话历史存档 | 输入参数为路径",
59
  "Function": HotReload(载入对话历史存档)
60
  },
61
+ "删除所有本地对话历史记录(谨慎操作)": {
62
+ "Group": "对话",
63
+ "AsButton": False,
64
+ "Info": "删除所有本地对话历史记录,谨慎操作 | 不需要输入参数",
65
  "Function": HotReload(删除所有本地对话历史记录)
66
  },
67
+ "清除所有缓存文件(谨慎操作)": {
68
+ "Group": "对话",
69
  "Color": "stop",
70
  "AsButton": False, # 加入下拉菜单中
71
+ "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
72
  "Function": HotReload(清除缓存)
73
  },
 
 
 
 
 
 
 
74
  "批量总结Word文档": {
75
+ "Group": "学术",
76
  "Color": "stop",
77
+ "AsButton": True,
78
+ "Info": "批量总结word文档 | 输入参数为路径",
79
  "Function": HotReload(总结word文档)
80
  },
81
  "解析整个C++项目头文件": {
82
+ "Group": "编程",
83
+ "Color": "stop",
84
  "AsButton": False, # 加入下拉菜单中
85
+ "Info": "解析一个C++项目的所有头文件(.h/.hpp) | 输入参数为路径",
86
  "Function": HotReload(解析一个C项目的头文件)
87
  },
88
  "解析整个C++项目(.cpp/.hpp/.c/.h)": {
89
+ "Group": "编程",
90
+ "Color": "stop",
91
  "AsButton": False, # 加入下拉菜单中
92
+ "Info": "解析一个C++项目的所有源文件(.cpp/.hpp/.c/.h)| 输入参数为路径",
93
  "Function": HotReload(解析一个C项目)
94
  },
95
  "解析整个Go项目": {
96
+ "Group": "编程",
97
+ "Color": "stop",
98
  "AsButton": False, # 加入下拉菜单中
99
+ "Info": "解析一个Go项目的所有源文件 | 输入参数为路径",
100
  "Function": HotReload(解析一个Golang项目)
101
  },
102
  "解析整个Rust项目": {
103
+ "Group": "编程",
104
+ "Color": "stop",
105
  "AsButton": False, # 加入下拉菜单中
106
+ "Info": "解析一个Rust项目的所有源文件 | 输入参数为路径",
107
  "Function": HotReload(解析一个Rust项目)
108
  },
109
  "解析整个Java项目": {
110
+ "Group": "编程",
111
+ "Color": "stop",
112
  "AsButton": False, # 加入下拉菜单中
113
+ "Info": "解析一个Java项目的所有源文件 | 输入参数为路径",
114
  "Function": HotReload(解析一个Java项目)
115
  },
116
  "解析整个前端项目(js,ts,css等)": {
117
+ "Group": "编程",
118
+ "Color": "stop",
119
  "AsButton": False, # 加入下拉菜单中
120
+ "Info": "解析一个前端项目的所有源文件(js,ts,css等) | 输入参数为路径",
121
  "Function": HotReload(解析一个前端项目)
122
  },
123
  "解析整个Lua项目": {
124
+ "Group": "编程",
125
+ "Color": "stop",
126
  "AsButton": False, # 加入下拉菜单中
127
+ "Info": "解析一个Lua项目的所有源文件 | 输入参数为路径",
128
  "Function": HotReload(解析一个Lua项目)
129
  },
130
  "解析整个CSharp项目": {
131
+ "Group": "编程",
132
+ "Color": "stop",
133
  "AsButton": False, # 加入下拉菜单中
134
+ "Info": "解析一个CSharp项目的所有源文件 | 输入参数为路径",
135
  "Function": HotReload(解析一个CSharp项目)
136
  },
137
+ "解析Jupyter Notebook文件": {
138
+ "Group": "编程",
139
+ "Color": "stop",
140
+ "AsButton": False,
141
+ "Info": "解析Jupyter Notebook文件 | 输入参数为路径",
142
+ "Function": HotReload(解析ipynb文件),
143
+ "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
144
+ "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示
145
+ },
146
  "读Tex论文写摘要": {
147
+ "Group": "学术",
148
+ "Color": "stop",
149
+ "AsButton": False,
150
+ "Info": "读取Tex论文并写摘要 | 输入参数为路径",
151
  "Function": HotReload(读文章写摘要)
152
  },
153
+ "翻译README或MD": {
154
+ "Group": "编程",
155
  "Color": "stop",
156
+ "AsButton": True,
157
+ "Info": "将Markdown翻译为中文 | 输入参数为路径或URL",
158
+ "Function": HotReload(Markdown英译中)
159
+ },
160
+ "翻译Markdown或README(支持Github链接)": {
161
+ "Group": "编程",
162
+ "Color": "stop",
163
+ "AsButton": False,
164
+ "Info": "将Markdown或README翻译为中文 | 输入参数为路径或URL",
165
  "Function": HotReload(Markdown英译中)
166
  },
167
  "批量生成函数注释": {
168
+ "Group": "编程",
169
+ "Color": "stop",
170
  "AsButton": False, # 加入下拉菜单中
171
+ "Info": "批量生成函数的注释 | 输入参数为路径",
172
  "Function": HotReload(批量生成函数注释)
173
  },
174
  "保存当前的对话": {
175
+ "Group": "对话",
176
+ "AsButton": True,
177
+ "Info": "保存当前的对话 | 不需要输入参数",
178
  "Function": HotReload(对话历史存档)
179
  },
180
+ "[多线程Demo]解析此项目本身(源码自译解)": {
181
+ "Group": "对话|编程",
182
  "AsButton": False, # 加入下拉菜单中
183
+ "Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
184
  "Function": HotReload(解析项目本身)
185
  },
186
+ "[插件demo]历史上的今天": {
187
+ "Group": "对话",
188
+ "AsButton": True,
+ "Info": "查看历史上的今天事件 | 不需要输入参数",
191
  },
192
+ "精准翻译PDF论文": {
193
+ "Group": "学术",
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
194
  "Color": "stop",
195
+ "AsButton": True,
196
+ "Info": "精准翻译PDF论文为中文 | 输入参数为路径",
197
  "Function": HotReload(批量翻译PDF文档)
198
  },
199
  "询问多个GPT模型": {
200
+ "Group": "对话",
201
+ "Color": "stop",
202
+ "AsButton": True,
203
  "Function": HotReload(同时问询)
204
  },
205
+ "批量总结PDF文档": {
206
+ "Group": "学术",
207
  "Color": "stop",
208
  "AsButton": False, # 加入下拉菜单中
209
+ "Info": "批量总结PDF文档的内容 | 输入参数为路径",
210
  "Function": HotReload(批量总结PDF文档)
 },
  "谷歌学术检索助手(输入谷歌学术搜索页url)": {
213
+ "Group": "学术",
214
  "Color": "stop",
215
  "AsButton": False, # 加入下拉菜单中
216
+ "Info": "使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL",
217
  "Function": HotReload(谷歌检索小助手)
218
  },
219
  "理解PDF文档内容 (模仿ChatPDF)": {
220
+ "Group": "学术",
221
  "Color": "stop",
222
  "AsButton": False, # 加入下拉菜单中
223
+ "Info": "理解PDF文档的内容并进行回答 | 输入参数为路径",
224
  "Function": HotReload(理解PDF文档内容标准文件输入)
225
  },
226
  "英文Latex项目全文润色(输入路径或上传压缩包)": {
227
+ "Group": "学术",
228
  "Color": "stop",
229
  "AsButton": False, # 加入下拉菜单中
230
+ "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
231
  "Function": HotReload(Latex英文润色)
232
  },
233
  "英文Latex项目全文纠错(输入路径或上传压缩包)": {
234
+ "Group": "学术",
235
  "Color": "stop",
236
  "AsButton": False, # 加入下拉菜单中
237
+ "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
238
  "Function": HotReload(Latex英文纠错)
239
  },
240
  "中文Latex项目全文润色(输入路径或上传压缩包)": {
241
+ "Group": "学术",
242
  "Color": "stop",
243
  "AsButton": False, # 加入下拉菜单中
244
+ "Info": "对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
245
  "Function": HotReload(Latex中文润色)
246
  },
247
  "Latex项目全文中译英(输入路径或上传压缩包)": {
248
+ "Group": "学术",
249
  "Color": "stop",
250
  "AsButton": False, # 加入下拉菜单中
251
+ "Info": "对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包",
252
  "Function": HotReload(Latex中译英)
253
  },
254
 "Latex项目全文英译中(输入路径或上传压缩包)": {
255
+ "Group": "学术",
256
  "Color": "stop",
257
  "AsButton": False, # 加入下拉菜单中
258
+ "Info": "对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包",
259
  "Function": HotReload(Latex英译中)
260
  },
261
  "批量Markdown中译英(输入路径或上传压缩包)": {
262
+ "Group": "编程",
263
  "Color": "stop",
264
  "AsButton": False, # 加入下拉菜单中
265
+ "Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
266
  "Function": HotReload(Markdown中译英)
267
  },
268
+ }
+
+ # -=--=- 尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-
  try:
272
  from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
273
  function_plugins.update({
274
  "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
275
+ "Group": "学术",
276
  "Color": "stop",
277
  "AsButton": False, # 加入下拉菜单中
278
+ # "Info": "下载arxiv论文并翻译摘要 | 输入参数为arxiv编号如1812.10695",
279
  "Function": HotReload(下载arxiv论文并翻译摘要)
280
  }
281
  })
 
286
  from crazy_functions.联网的ChatGPT import 连接网络回答问题
287
  function_plugins.update({
288
  "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
289
+ "Group": "对话",
290
  "Color": "stop",
291
  "AsButton": False, # 加入下拉菜单中
292
+ # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
293
  "Function": HotReload(连接网络回答问题)
294
  }
295
  })
296
  from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
297
  function_plugins.update({
298
  "连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
299
+ "Group": "对话",
300
  "Color": "stop",
301
  "AsButton": False, # 加入下拉菜单中
302
+ "Info": "连接网络回答问题(需要访问中文Bing)| 输入参数是一个问题",
303
  "Function": HotReload(连接bing搜索回答问题)
304
  }
305
  })
 
310
  from crazy_functions.解析项目源代码 import 解析任意code项目
311
  function_plugins.update({
312
  "解析项目源代码(手动指定和筛选源代码文件类型)": {
313
+ "Group": "编程",
314
  "Color": "stop",
315
  "AsButton": False,
316
+ "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
317
+ "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示
318
  "Function": HotReload(解析任意code项目)
319
  },
320
  })
 
325
  from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
326
  function_plugins.update({
327
  "询问多个GPT模型(手动指定询问哪些模型)": {
328
+ "Group": "对话",
329
  "Color": "stop",
330
  "AsButton": False,
331
+ "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
332
+ "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
333
  "Function": HotReload(同时问询_指定模型)
334
  },
335
  })
 
340
  from crazy_functions.图片生成 import 图片生成
341
  function_plugins.update({
342
  "图片生成(先切换模型到openai或api2d)": {
343
+ "Group": "对话",
344
  "Color": "stop",
345
  "AsButton": False,
346
+ "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
347
+ "ArgsReminder": "在这里输入分辨率, 如256x256(默认)", # 高级参数输入区的显示提示
348
+ "Info": "图片生成 | 输入参数字符串,提供图像的内容",
349
  "Function": HotReload(图片生成)
350
  },
351
  })
 
356
  from crazy_functions.总结音视频 import 总结音视频
357
  function_plugins.update({
358
  "批量总结音视频(输入路径或上传压缩包)": {
359
+ "Group": "对话",
360
  "Color": "stop",
361
  "AsButton": False,
362
  "AdvancedArgs": True,
363
  "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。",
364
+ "Info": "批量总结音频或视频 | 输入参数为路径",
365
  "Function": HotReload(总结音视频)
366
  }
367
  })
 
372
  from crazy_functions.数学动画生成manim import 动画生成
373
  function_plugins.update({
374
  "数学动画生成(Manim)": {
375
+ "Group": "对话",
376
  "Color": "stop",
377
  "AsButton": False,
378
+ "Info": "按照自然语言描述生成一个动画 | 输入参数是一段话",
379
  "Function": HotReload(动画生成)
380
  }
381
  })
 
386
  from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
387
  function_plugins.update({
388
  "Markdown翻译(手动指定语言)": {
389
+ "Group": "编程",
390
  "Color": "stop",
391
  "AsButton": False,
392
  "AdvancedArgs": True,
 
401
  from crazy_functions.Langchain知识库 import 知识库问答
402
  function_plugins.update({
403
  "构建知识库(请先上传文件素材)": {
404
+ "Group": "对话",
405
  "Color": "stop",
406
  "AsButton": False,
407
  "AdvancedArgs": True,
 
416
  from crazy_functions.Langchain知识库 import 读取知识库作答
417
  function_plugins.update({
418
  "知识库问答": {
419
+ "Group": "对话",
420
  "Color": "stop",
421
  "AsButton": False,
422
  "AdvancedArgs": True,
 
426
  })
427
  except:
428
  print('Load function plugin failed')
429
+
430
  try:
431
  from crazy_functions.交互功能函数模板 import 交互功能模板函数
432
  function_plugins.update({
433
  "交互功能模板函数": {
434
+ "Group": "对话",
435
  "Color": "stop",
436
  "AsButton": False,
437
  "Function": HotReload(交互功能模板函数)
 
440
  except:
441
  print('Load function plugin failed')
  try:
444
  from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
445
  function_plugins.update({
446
  "Latex英文纠错+高亮修正位置 [需Latex]": {
447
+ "Group": "学术",
448
  "Color": "stop",
449
  "AsButton": False,
450
  "AdvancedArgs": True,
 
455
  from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
456
  function_plugins.update({
457
  "Arixv论文精细翻译(输入arxivID)[需Latex]": {
458
+ "Group": "学术",
459
  "Color": "stop",
460
  "AsButton": False,
461
  "AdvancedArgs": True,
462
+ "ArgsReminder":
463
+ "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
464
+ "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
465
+ 'If the term "agent" is used in this section, it should be translated to "智能体". ',
466
+ "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
467
  "Function": HotReload(Latex翻译中文并重新编译PDF)
468
  }
469
  })
470
  function_plugins.update({
471
  "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
472
+ "Group": "学术",
473
  "Color": "stop",
474
  "AsButton": False,
475
  "AdvancedArgs": True,
476
+ "ArgsReminder":
477
+ "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
478
+ "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
479
+ 'If the term "agent" is used in this section, it should be translated to "智能体". ',
480
+ "Info": "本地Latex论文精细翻译 | 输入参数是路径",
481
  "Function": HotReload(Latex翻译中文并重新编译PDF)
482
  }
483
  })
484
  except:
 print('Load function plugin failed')

+ try:
+     from toolbox import get_conf
+     ENABLE_AUDIO, = get_conf('ENABLE_AUDIO')
+     if ENABLE_AUDIO:
+         from crazy_functions.语音助手 import 语音助手
+         function_plugins.update({
+             "实时音频采集": {
+                 "Group": "对话",
+                 "Color": "stop",
+                 "AsButton": True,
+                 "Info": "开始语言对话 | 没有输入参数",
+                 "Function": HotReload(语音助手)
+             }
+         })
+ except:
+     print('Load function plugin failed')
+
+ try:
+     from crazy_functions.批量翻译PDF文档_NOUGAT import 批量翻译PDF文档
+     function_plugins.update({
+         "精准翻译PDF文档(NOUGAT)": {
+             "Group": "学术",
+             "Color": "stop",
+             "AsButton": False,
+             "Function": HotReload(批量翻译PDF文档)
+         }
+     })
+ except:
+     print('Load function plugin failed')
+
+
+ # try:
+ #     from crazy_functions.CodeInterpreter import 虚空终端CodeInterpreter
+ #     function_plugins.update({
+ #         "CodeInterpreter(开发中,仅供测试)": {
+ #             "Group": "编程|对话",
+ #             "Color": "stop",
+ #             "AsButton": False,
+ #             "Function": HotReload(虚空终端CodeInterpreter)
+ #         }
+ #     })
+ # except:
+ #     print('Load function plugin failed')
+
 # try:
+ #     from crazy_functions.chatglm微调工具 import 微调数据集生成
 #     function_plugins.update({
+ #         "黑盒模型学习: 微调数据集生成 (先上传数据集)": {
 #             "Color": "stop",
 #             "AsButton": False,
+ #             "AdvancedArgs": True,
+ #             "ArgsReminder": "针对数据集输入(如 绿帽子*深蓝色衬衫*黑色运动裤)给出指令,例如您可以将以下命令复制到下方: --llm_to_learn=azure-gpt-3.5 --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、过去经历进行描写。要求:100字以内,用第二人称。' --system_prompt=''",
+ #             "Function": HotReload(微调数据集生成)
 #         }
 #     })
 # except:
 #     print('Load function plugin failed')
+
+
+ """
+ 设置默认值:
+ - 默认 Group = 对话
+ - 默认 AsButton = True
+ - 默认 AdvancedArgs = False
+ - 默认 Color = secondary
+ """
+ for name, function_meta in function_plugins.items():
+     if "Group" not in function_meta:
+         function_plugins[name]["Group"] = '对话'
+     if "AsButton" not in function_meta:
+         function_plugins[name]["AsButton"] = True
+     if "AdvancedArgs" not in function_meta:
+         function_plugins[name]["AdvancedArgs"] = False
+     if "Color" not in function_meta:
+         function_plugins[name]["Color"] = 'secondary'
+
 return function_plugins
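The defaults-filling loop added at the end of `get_crazy_functions` can be sketched as a small standalone helper. The field names and default values (`Group`, `AsButton`, `AdvancedArgs`, `Color`) come from the diff above; the `apply_plugin_defaults` function name and the sample registry entries are illustrative, not repository code.

```python
# Minimal sketch of the plugin-metadata normalization step from the diff.
# Defaults mirror the commit; the example registry entries are made up.

def apply_plugin_defaults(function_plugins):
    defaults = {
        "Group": "对话",
        "AsButton": True,
        "AdvancedArgs": False,
        "Color": "secondary",
    }
    for name, function_meta in function_plugins.items():
        for key, value in defaults.items():
            # fill in any field the plugin author omitted
            function_meta.setdefault(key, value)
    return function_plugins

plugins = apply_plugin_defaults({
    "demo_plugin": {"Color": "stop"},       # hypothetical entry
    "another_plugin": {"AsButton": False},  # hypothetical entry
})
print(plugins["demo_plugin"]["Group"])     # 对话 (filled in)
print(plugins["another_plugin"]["Color"])  # secondary (filled in)
```

Using `setdefault` per field keeps the behavior identical to the four `if "X" not in function_meta` checks in the commit while staying data-driven.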
crazy_functions/CodeInterpreter.py ADDED
@@ -0,0 +1,231 @@
+ from collections.abc import Callable, Iterable, Mapping
+ from typing import Any
+ from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, promote_file_to_downloadzone, clear_file_downloadzone
+ from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+ from .crazy_utils import input_clipping, try_install_deps
+ from multiprocessing import Process, Pipe
+ import os
+ import time
+
+ templete = """
+ ```python
+ import ...  # Put dependencies here, e.g. import numpy as np
+
+ class TerminalFunction(object):  # Do not change the name of the class, The name of the class must be `TerminalFunction`
+
+     def run(self, path):  # The name of the function must be `run`, it takes only a positional argument.
+         # rewrite the function you have just written here
+         ...
+         return generated_file_path
+ ```
+ """
+
+ def inspect_dependency(chatbot, history):
+     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
+     return True
+
+ def get_code_block(reply):
+     import re
+     pattern = r"```([\s\S]*?)```"  # regex pattern to match code blocks
+     matches = re.findall(pattern, reply)  # find all code blocks in text
+     if len(matches) == 1:
+         return matches[0].strip('python')  # code block
+     for match in matches:
+         if 'class TerminalFunction' in match:
+             return match.strip('python')  # code block
+     raise RuntimeError("GPT is not generating proper code.")
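The extraction idea behind `get_code_block` can be exercised in isolation. The non-greedy regex is the one from the diff; the `extract_code_block` wrapper and the sample reply string are assumptions for the demo (the fence is built with `"`" * 3` only so this example does not close its own code block).

```python
# Standalone sketch of the fenced-code-block extraction from the diff.
import re

FENCE = "`" * 3  # triple backtick, assembled to keep this example fence-safe

def extract_code_block(reply):
    # same non-greedy, dot-matches-newline pattern as get_code_block
    pattern = FENCE + r"([\s\S]*?)" + FENCE
    matches = re.findall(pattern, reply)
    if len(matches) == 1:
        # str.strip('python') removes those *characters* from both ends;
        # here it drops the leading "python" language tag of the block
        return matches[0].strip('python')
    raise RuntimeError("expected exactly one code block")

reply = "Here you go:\n" + FENCE + "python\nx = 1 + 1\nprint(x)\n" + FENCE
print(extract_code_block(reply))  # returns "\nx = 1 + 1\nprint(x)\n"
```

Note that `str.strip('python')` treats its argument as a character set, which works for a tag at the block boundary but would also eat any leading/trailing `p/y/t/h/o/n` characters of the code itself.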
+
+ def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
+     # 输入
+     prompt_compose = [
+         f'Your job:\n'
+         f'1. write a single Python function, which takes a path of a `{file_type}` file as the only argument and returns a `string` containing the result of analysis or the path of generated files. \n',
+         f"2. You should write this function to perform following task: " + txt + "\n",
+         f"3. Wrap the output python function with markdown codeblock."
+     ]
+     i_say = "".join(prompt_compose)
+     demo = []
+
+     # 第一步
+     gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+         inputs=i_say, inputs_show_user=i_say,
+         llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
+         sys_prompt=r"You are a programmer."
+     )
+     history.extend([i_say, gpt_say])
+     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
+
+     # 第二步
+     prompt_compose = [
+         "If previous stage is successful, rewrite the function you have just written to satisfy following templete: \n",
+         templete
+     ]
+     i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable templete. "
+     gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+         inputs=i_say, inputs_show_user=inputs_show_user,
+         llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
+         sys_prompt=r"You are a programmer."
+     )
+     code_to_return = gpt_say
+     history.extend([i_say, gpt_say])
+     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
+
+     # # 第三步
+     # i_say = "Please list to packages to install to run the code above. Then show me how to use `try_install_deps` function to install them."
+     # i_say += 'For instance. `try_install_deps(["opencv-python", "scipy", "numpy"])`'
+     # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
+     #     inputs=i_say, inputs_show_user=inputs_show_user,
+     #     llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
+     #     sys_prompt=r"You are a programmer."
+     # )
+     # # # 第三步
+     # i_say = "Show me how to use `pip` to install packages to run the code above. "
+     # i_say += 'For instance. `pip install -r opencv-python scipy numpy`'
+     # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
+     #     inputs=i_say, inputs_show_user=i_say,
+     #     llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
+     #     sys_prompt=r"You are a programmer."
+     # )
+     installation_advance = ""
+
+     return code_to_return, installation_advance, txt, file_type, llm_kwargs, chatbot, history
+
+ def make_module(code):
+     module_file = 'gpt_fn_' + gen_time_str().replace('-', '_')
+     with open(f'gpt_log/{module_file}.py', 'w', encoding='utf8') as f:
+         f.write(code)
+
+     def get_class_name(class_string):
+         import re
+         # Use regex to extract the class name
+         class_name = re.search(r'class (\w+)\(', class_string).group(1)
+         return class_name
+
+     class_name = get_class_name(code)
+     return f"gpt_log.{module_file}->{class_name}"
+
+ def init_module_instance(module):
+     import importlib
+     module_, class_ = module.split('->')
+     init_f = getattr(importlib.import_module(module_), class_)
+     return init_f()
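`make_module` and `init_module_instance` round-trip a `"module_path->ClassName"` string: the generated code is written under `gpt_log/`, and the string is later resolved back to a class and instantiated. The resolution half can be demonstrated with a standard-library class instead of a generated module (the `collections->OrderedDict` target is a stand-in, not something the plugin produces):

```python
import importlib

# Sketch of the "module_path->ClassName" lookup used by init_module_instance.
# A stdlib class stands in for the generated gpt_log module.

def init_module_instance(module):
    module_, class_ = module.split('->')
    # import the module by dotted path, then fetch the class by name
    init_f = getattr(importlib.import_module(module_), class_)
    return init_f()

instance = init_module_instance("collections->OrderedDict")
print(type(instance).__name__)  # OrderedDict
```

Encoding the target as a plain string lets the parent process hand the generated class to a subprocess without pickling the class object itself.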
112
+
113
+ def for_immediate_show_off_when_possible(file_type, fp, chatbot):
114
+ if file_type in ['png', 'jpg']:
115
+ image_path = os.path.abspath(fp)
116
+ chatbot.append(['这是一张图片, 展示如下:',
117
+ f'本地文件地址: <br/>`{image_path}`<br/>'+
118
+ f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
119
+ ])
120
+ return chatbot
121
+
122
+ def subprocess_worker(instance, file_path, return_dict):
123
+ return_dict['result'] = instance.run(file_path)
124
+
125
+ def have_any_recent_upload_files(chatbot):
126
+ _5min = 5 * 60
127
+ if not chatbot: return False # chatbot is None
128
+ most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
129
+ if not most_recent_uploaded: return False # most_recent_uploaded is None
130
+ if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
131
+ else: return False # most_recent_uploaded is too old
132
+
133
+ def get_recent_file_prompt_support(chatbot):
134
+ most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
135
+ path = most_recent_uploaded['path']
136
+ return path
137
+
138
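`have_any_recent_upload_files` above keys on a five-minute freshness window stored in the cookie dict. The same check with the clock injected, so it can be exercised deterministically (the cookie layout mirrors the one used above):

```python
import time

FIVE_MIN = 5 * 60

def has_recent_upload(cookies, now=None):
    # True only if a file was uploaded within the last five minutes.
    if now is None:
        now = time.time()
    most_recent = cookies.get("most_recent_uploaded")
    if not most_recent:
        return False
    return (now - most_recent["time"]) < FIVE_MIN

print(has_recent_upload({"most_recent_uploaded": {"time": 1000}}, now=1100))   # True
print(has_recent_upload({"most_recent_uploaded": {"time": 1000}}, now=1700))   # False
print(has_recent_upload({}, now=0))                                            # False
```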
+ @CatchException
139
+ def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
140
+ """
141
+ txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
142
+ llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
143
+ plugin_kwargs 插件模型的参数,暂时没有用武之地
144
+ chatbot 聊天显示框的句柄,用于显示给用户
145
+ history 聊天历史,前情提要
146
+ system_prompt 给gpt的静默提醒
147
+ web_port 当前软件运行的端口号
148
+ """
149
+ raise NotImplementedError
150
+
151
+ # 清空历史,以免输入溢出
152
+ history = []; clear_file_downloadzone(chatbot)
153
+
154
+ # 基本信息:功能、贡献者
155
+ chatbot.append([
156
+ "函数插件功能?",
157
+ "CodeInterpreter开源版, 此插件处于开发阶段, 建议暂时不要使用, 插件初始化中 ..."
158
+ ])
159
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
160
+
161
+ if have_any_recent_upload_files(chatbot):
162
+ file_path = get_recent_file_prompt_support(chatbot)
163
+ else:
164
+ chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
165
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
166
+
167
+ # 读取文件
168
+ if ("recently_uploaded_files" in plugin_kwargs) and (plugin_kwargs["recently_uploaded_files"] == ""): plugin_kwargs.pop("recently_uploaded_files")
169
+ recently_uploaded_files = plugin_kwargs.get("recently_uploaded_files", None)
170
+ file_path = recently_uploaded_files[-1]
171
+ file_type = file_path.split('.')[-1]
172
+
173
+ # 粗心检查
174
+ if 'private_upload' in txt:
175
+ chatbot.append([
176
+ "...",
177
+ f"请在输入框内填写需求,然后再次点击该插件(文件路径 {file_path} 已经被记忆)"
178
+ ])
179
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
180
+ return
181
+
182
+ # 开始干正事
183
+ for j in range(5): # 最多重试5次
184
+ try:
185
+ code, installation_advance, txt, file_type, llm_kwargs, chatbot, history = \
186
+ yield from gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history)
187
+ code = get_code_block(code)
188
+ res = make_module(code)
189
+ instance = init_module_instance(res)
190
+ break
191
+ except Exception as e:
192
+ chatbot.append([f"第{j+1}次代码生成尝试, 失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
193
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
194
+
195
+ # 代码生成结束, 开始执行
196
+ try:
197
+ import multiprocessing
198
+ manager = multiprocessing.Manager()
199
+ return_dict = manager.dict()
200
+
201
+ p = multiprocessing.Process(target=subprocess_worker, args=(instance, file_path, return_dict))
202
+ # only has 10 seconds to run
203
+ p.start(); p.join(timeout=10)
204
+ if p.is_alive(): p.terminate(); p.join()
205
+ p.close()
206
+ res = return_dict['result']
207
+ # res = instance.run(file_path)
208
+ except Exception as e:
209
+ chatbot.append(["执行失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
210
+ # chatbot.append(["如果是缺乏依赖,请参考以下建议", installation_advance])
211
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
212
+ return
213
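The execution step above hands the generated instance to a child process and enforces a wall-clock budget with `join(timeout=...)`. A reduced sketch of that guard, assuming a POSIX host (fork start method) and a toy worker in place of `instance.run`:

```python
import multiprocessing

def worker(x, q):
    # Stand-in for instance.run(file_path): square the input.
    q.put(x * x)

ctx = multiprocessing.get_context("fork")  # assumes a POSIX host
q = ctx.Queue()
p = ctx.Process(target=worker, args=(6, q))
p.start(); p.join(timeout=10)              # the worker gets at most 10 seconds
if p.is_alive():
    p.terminate(); p.join()                # kill a runaway worker
try:
    result = q.get(timeout=5)
except Exception:
    result = None
print(result)  # 36
```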
+
214
+ # 顺利完成,收尾
215
+ res = str(res)
216
+ if os.path.exists(res):
217
+ chatbot.append(["执行成功了,结果是一个有效文件", "结果:" + res])
218
+ new_file_path = promote_file_to_downloadzone(res, chatbot=chatbot)
219
+ chatbot = for_immediate_show_off_when_possible(file_type, new_file_path, chatbot)
220
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
221
+ else:
222
+ chatbot.append(["执行成功了,结果是一个字符串", "结果:" + res])
223
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
224
+
225
+ """
226
+ 测试:
227
+ 裁剪图像,保留下半部分
228
+ 交换图像的蓝色通道和红色通道
229
+ 将图像转为灰度图像
230
+ 将csv文件转excel表格
231
+ """
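The `make_module` / `init_module_instance` pair above persists generated code and re-imports it by a `"module->Class"` path string. A minimal, self-contained sketch of the same round-trip (the module and class names here are invented for illustration):

```python
import importlib, os, sys, tempfile

def make_module(code, class_name, workdir):
    # Write the generated code to a module file and return a "module->Class" ref.
    module_name = "gpt_fn_demo"
    with open(os.path.join(workdir, module_name + ".py"), "w", encoding="utf8") as f:
        f.write(code)
    return f"{module_name}->{class_name}"

def init_module_instance(module_ref, workdir):
    # Split "module->Class", import the module, and instantiate the class.
    module_name, class_name = module_ref.split("->")
    sys.path.insert(0, workdir)
    try:
        cls = getattr(importlib.import_module(module_name), class_name)
    finally:
        sys.path.pop(0)
    return cls()

workdir = tempfile.mkdtemp()
code = "class Greeter:\n    def run(self, x):\n        return 'hello ' + x\n"
ref = make_module(code, "Greeter", workdir)
instance = init_module_instance(ref, workdir)
print(instance.run("world"))  # hello world
```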
crazy_functions/Latex输出PDF结果.py CHANGED
@@ -6,7 +6,7 @@ pj = os.path.join
6
  ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
7
 
8
  # =================================== 工具函数 ===============================================
9
- 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
10
  def switch_prompt(pfg, mode, more_requirement):
11
  """
12
  Generate prompts and system prompts based on the mode for proofreading or translating.
@@ -109,7 +109,7 @@ def arxiv_download(chatbot, history, txt):
109
 
110
  url_ = txt # https://arxiv.org/abs/1707.06690
111
  if not txt.startswith('https://arxiv.org/abs/'):
112
- msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
113
  yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
114
  return msg, None
115
  # <-------------- set format ------------->
@@ -255,7 +255,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
255
  project_folder = txt
256
  else:
257
  if txt == "": txt = '空空如也的输入栏'
258
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
259
  yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
260
  return
261
 
@@ -291,7 +291,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
291
  yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
292
  promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
293
  else:
294
- chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
295
  yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
296
  promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
297
 
 
6
  ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
7
 
8
  # =================================== 工具函数 ===============================================
9
+ # 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
10
  def switch_prompt(pfg, mode, more_requirement):
11
  """
12
  Generate prompts and system prompts based on the mode for proofreading or translating.
 
109
 
110
  url_ = txt # https://arxiv.org/abs/1707.06690
111
  if not txt.startswith('https://arxiv.org/abs/'):
112
+ msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
113
  yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
114
  return msg, None
115
  # <-------------- set format ------------->
 
255
  project_folder = txt
256
  else:
257
  if txt == "": txt = '空空如也的输入栏'
258
+ report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无法处理: {txt}")
259
  yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
260
  return
261
 
 
291
  yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
292
  promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
293
  else:
294
+ chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
295
  yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
296
  promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
297
 
crazy_functions/crazy_utils.py CHANGED
@@ -591,11 +591,16 @@ def get_files_from_everything(txt, type): # type='.md'
591
  # 网络的远程文件
592
  import requests
593
  from toolbox import get_conf
 
594
  proxies, = get_conf('proxies')
595
- r = requests.get(txt, proxies=proxies)
596
- with open('./gpt_log/temp'+type, 'wb+') as f: f.write(r.content)
597
- project_folder = './gpt_log/'
598
- file_manifest = ['./gpt_log/temp'+type]
 
 
 
 
599
  elif txt.endswith(type):
600
  # 直接给定文件
601
  file_manifest = [txt]
 
591
  # 网络的远程文件
592
  import requests
593
  from toolbox import get_conf
594
+ from toolbox import get_log_folder, gen_time_str
595
  proxies, = get_conf('proxies')
596
+ try:
597
+ r = requests.get(txt, proxies=proxies)
598
+ except:
599
+ raise ConnectionRefusedError(f"无法下载资源{txt},请检查。")
600
+ path = os.path.join(get_log_folder(plugin_name='web_download'), gen_time_str()+type)
601
+ with open(path, 'wb+') as f: f.write(r.content)
602
+ project_folder = get_log_folder(plugin_name='web_download')
603
+ file_manifest = [path]
604
  elif txt.endswith(type):
605
  # 直接给定文件
606
  file_manifest = [txt]
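The rewritten download branch above fetches the remote file, converts any `requests` failure into a `ConnectionRefusedError`, and writes the payload to a timestamped name inside a per-plugin log folder. A sketch of the same flow with the fetcher injected so no network is needed (the helper name is invented):

```python
import os, tempfile, time

def download_to_folder(url, folder, ext, fetch):
    # fetch(url) stands in for requests.get(url, proxies=proxies).
    try:
        r = fetch(url)
    except Exception:
        raise ConnectionRefusedError(f"无法下载资源{url},请检查。")
    # Timestamped filename, mirroring gen_time_str() + type above.
    path = os.path.join(folder, time.strftime("%Y-%m-%d-%H-%M-%S") + ext)
    with open(path, "wb") as f:
        f.write(r.content)
    return path

class FakeResponse:
    content = b"# hello"

folder = tempfile.mkdtemp()
path = download_to_folder("https://example.com/a.md", folder, ".md",
                          lambda url: FakeResponse())
print(open(path, "rb").read())  # b'# hello'
```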
crazy_functions/json_fns/pydantic_io.py ADDED
@@ -0,0 +1,111 @@
1
+ """
2
+ https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/model_io/output_parsers/pydantic.ipynb
3
+
4
+ Example 1.
5
+
6
+ # Define your desired data structure.
7
+ class Joke(BaseModel):
8
+ setup: str = Field(description="question to set up a joke")
9
+ punchline: str = Field(description="answer to resolve the joke")
10
+
11
+ # You can add custom validation logic easily with Pydantic.
12
+ @validator("setup")
13
+ def question_ends_with_question_mark(cls, field):
14
+ if field[-1] != "?":
15
+ raise ValueError("Badly formed question!")
16
+ return field
17
+
18
+
19
+ Example 2.
20
+
21
+ # Here's another example, but with a compound typed field.
22
+ class Actor(BaseModel):
23
+ name: str = Field(description="name of an actor")
24
+ film_names: List[str] = Field(description="list of names of films they starred in")
25
+ """
26
+
27
+ import json, re, logging
28
+
29
+
30
+ PYDANTIC_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
31
+
32
+ As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
33
+ the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
34
+
35
+ Here is the output schema:
36
+ ```
37
+ {schema}
38
+ ```"""
39
+
40
+
41
+ PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
42
+ ```
43
+ {schema}
44
+ ```"""
45
+
46
+ class JsonStringError(Exception): ...
47
+
48
+ class GptJsonIO():
49
+
50
+ def __init__(self, schema, example_instruction=True):
51
+ self.pydantic_object = schema
52
+ self.example_instruction = example_instruction
53
+ self.format_instructions = self.generate_format_instructions()
54
+
55
+ def generate_format_instructions(self):
56
+ schema = self.pydantic_object.schema()
57
+
58
+ # Remove extraneous fields.
59
+ reduced_schema = schema
60
+ if "title" in reduced_schema:
61
+ del reduced_schema["title"]
62
+ if "type" in reduced_schema:
63
+ del reduced_schema["type"]
64
+ # Ensure json in context is well-formed with double quotes.
65
+ if self.example_instruction:
66
+ schema_str = json.dumps(reduced_schema)
67
+ return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
68
+ else:
69
+ return PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE.format(schema=schema_str)
70
+
71
+ def generate_output(self, text):
72
+ # Greedy search for 1st json candidate.
73
+ match = re.search(
74
+ r"\{.*\}", text.strip(), re.MULTILINE | re.IGNORECASE | re.DOTALL
75
+ )
76
+ json_str = ""
77
+ if match: json_str = match.group()
78
+ json_object = json.loads(json_str, strict=False)
79
+ final_object = self.pydantic_object.parse_obj(json_object)
80
+ return final_object
81
+
82
+ def generate_repair_prompt(self, broken_json, error):
83
+ prompt = "Fix a broken json string.\n\n" + \
84
+ "(1) The broken json string need to fix is: \n\n" + \
85
+ "```" + "\n" + \
86
+ broken_json + "\n" + \
87
+ "```" + "\n\n" + \
88
+ "(2) The error message is: \n\n" + \
89
+ error + "\n\n" + \
90
+ "Now, fix this json string. \n\n"
91
+ return prompt
92
+
93
+ def generate_output_auto_repair(self, response, gpt_gen_fn):
94
+ """
95
+ response: string containing canidate json
96
+ gpt_gen_fn: gpt_gen_fn(inputs, sys_prompt)
97
+ """
98
+ try:
99
+ result = self.generate_output(response)
100
+ except Exception as e:
101
+ try:
102
+ logging.info(f'Repairing json:{response}')
103
+ repair_prompt = self.generate_repair_prompt(broken_json = response, error=repr(e))
104
+ result = self.generate_output(gpt_gen_fn(repair_prompt, self.format_instructions))
105
+ logging.info('Repaire json success.')
106
+ except Exception as e:
107
+ # 没辙了,放弃治疗
108
+ logging.info('Repaire json fail.')
109
+ raise JsonStringError('Cannot repair json.', str(e))
110
+ return result
111
+
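`generate_output_auto_repair` above greedily grabs the first `{...}` span, parses it, and only asks the model to repair the string when parsing fails. The same control flow without the pydantic layer, using a canned `gen_fn` in place of a real model call:

```python
import json, re

def extract_json(text):
    # Greedy search for the first {...} candidate, as above.
    match = re.search(r"\{.*\}", text.strip(), re.MULTILINE | re.DOTALL)
    if not match:
        raise ValueError("no json candidate found")
    return json.loads(match.group(), strict=False)

def parse_with_auto_repair(response, gen_fn):
    try:
        return extract_json(response)
    except Exception as e:
        # One repair round-trip: hand the broken string and error back to the model.
        repaired = gen_fn(f"Fix this broken json:\n{response}\nError: {e!r}")
        return extract_json(repaired)

good = 'Sure! Here it is: {"plugin_selection": "F_0001"}'
print(parse_with_auto_repair(good, gen_fn=None))  # {'plugin_selection': 'F_0001'}

broken = '{"plugin_selection": "F_0001",}'  # trailing comma breaks json.loads
fixed = parse_with_auto_repair(broken, gen_fn=lambda prompt: '{"plugin_selection": "F_0001"}')
print(fixed)  # {'plugin_selection': 'F_0001'}
```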
crazy_functions/live_audio/aliyunASR.py CHANGED
@@ -1,4 +1,4 @@
1
- import time, threading, json
2
 
3
 
4
  class AliyunASR():
@@ -12,14 +12,14 @@ class AliyunASR():
12
  message = json.loads(message)
13
  self.parsed_sentence = message['payload']['result']
14
  self.event_on_entence_end.set()
15
- print(self.parsed_sentence)
16
 
17
  def test_on_start(self, message, *args):
18
  # print("test_on_start:{}".format(message))
19
  pass
20
 
21
  def test_on_error(self, message, *args):
22
- print("on_error args=>{}".format(args))
23
  pass
24
 
25
  def test_on_close(self, *args):
@@ -36,7 +36,6 @@ class AliyunASR():
36
  # print("on_completed:args=>{} message=>{}".format(args, message))
37
  pass
38
 
39
-
40
  def audio_convertion_thread(self, uuid):
41
  # 在一个异步线程中采集音频
42
  import nls # pip install git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
 
1
+ import time, logging, json
2
 
3
 
4
  class AliyunASR():
 
12
  message = json.loads(message)
13
  self.parsed_sentence = message['payload']['result']
14
  self.event_on_entence_end.set()
15
+ # print(self.parsed_sentence)
16
 
17
  def test_on_start(self, message, *args):
18
  # print("test_on_start:{}".format(message))
19
  pass
20
 
21
  def test_on_error(self, message, *args):
22
+ logging.error("on_error args=>{}".format(args))
23
  pass
24
 
25
  def test_on_close(self, *args):
 
36
  # print("on_completed:args=>{} message=>{}".format(args, message))
37
  pass
38
 
 
39
  def audio_convertion_thread(self, uuid):
40
  # 在一个异步线程中采集音频
41
  import nls # pip install git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
crazy_functions/pdf_fns/parse_pdf.py CHANGED
@@ -20,6 +20,11 @@ def get_avail_grobid_url():
20
  def parse_pdf(pdf_path, grobid_url):
21
  import scipdf # pip install scipdf_parser
22
  if grobid_url.endswith('/'): grobid_url = grobid_url.rstrip('/')
23
- article_dict = scipdf.parse_pdf_to_dict(pdf_path, grobid_url=grobid_url)
 
 
 
 
 
24
  return article_dict
25
 
 
20
  def parse_pdf(pdf_path, grobid_url):
21
  import scipdf # pip install scipdf_parser
22
  if grobid_url.endswith('/'): grobid_url = grobid_url.rstrip('/')
23
+ try:
24
+ article_dict = scipdf.parse_pdf_to_dict(pdf_path, grobid_url=grobid_url)
25
+ except GROBID_OFFLINE_EXCEPTION:
26
+ raise GROBID_OFFLINE_EXCEPTION("GROBID服务不可用,请修改config中的GROBID_URL,可修改成本地GROBID服务。")
27
+ except:
28
+ raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
29
  return article_dict
30
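`parse_pdf` now separates a service outage from a corrupt input by re-raising distinct exception types. The same mapping pattern with a stand-in parser (the exception name here is illustrative, not GROBID's):

```python
class ServiceOffline(Exception):
    pass

def parse_with_diagnostics(parse_fn, path):
    # Map low-level failures onto two actionable error classes.
    try:
        return parse_fn(path)
    except ConnectionError as e:
        raise ServiceOffline("解析服务不可用,请检查服务地址。") from e
    except Exception as e:
        raise RuntimeError("解析PDF失败,请检查PDF是否损坏。") from e

print(parse_with_diagnostics(lambda p: {"title": "demo"}, "a.pdf"))  # {'title': 'demo'}
```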
 
crazy_functions/vt_fns/vt_call_plugin.py ADDED
@@ -0,0 +1,114 @@
1
+ from pydantic import BaseModel, Field
2
+ from typing import List
3
+ from toolbox import update_ui_lastest_msg, disable_auto_promotion
4
+ from request_llm.bridge_all import predict_no_ui_long_connection
5
+ from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
6
+ import copy, json, pickle, os, sys, time
7
+
8
+
9
+ def read_avail_plugin_enum():
10
+ from crazy_functional import get_crazy_functions
11
+ plugin_arr = get_crazy_functions()
12
+ # remove plugins with out explaination
13
+ plugin_arr = {k:v for k, v in plugin_arr.items() if 'Info' in v}
14
+ plugin_arr_info = {"F_{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
15
+ plugin_arr_dict = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
16
+ plugin_arr_dict_parse = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
17
+ plugin_arr_dict_parse.update({f"F_{i}":v for i, v in enumerate(plugin_arr.values(), start=1)})
18
+ prompt = json.dumps(plugin_arr_info, ensure_ascii=False, indent=2)
19
+ prompt = "\n\nThe definition of PluginEnum:\nPluginEnum=" + prompt
20
+ return prompt, plugin_arr_dict, plugin_arr_dict_parse
21
+
22
+ def wrap_code(txt):
23
+ txt = txt.replace('```','')
24
+ return f"\n```\n{txt}\n```\n"
25
+
26
+ def have_any_recent_upload_files(chatbot):
27
+ _5min = 5 * 60
28
+ if not chatbot: return False # chatbot is None
29
+ most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
30
+ if not most_recent_uploaded: return False # most_recent_uploaded is None
31
+ if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
32
+ else: return False # most_recent_uploaded is too old
33
+
34
+ def get_recent_file_prompt_support(chatbot):
35
+ most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
36
+ path = most_recent_uploaded['path']
37
+ prompt = "\nAdditional Information:\n"
38
+ prompt += "In case that this plugin requires a path or a file as an argument, "
39
+ prompt += f"it is important for you to know that the user has recently uploaded a file, located at: `{path}`. "
40
+ prompt += "Only use it when necessary; otherwise, you can ignore this file."
41
+ return prompt
42
+
43
+ def get_inputs_show_user(inputs, plugin_arr_enum_prompt):
44
+ # remove plugin_arr_enum_prompt from inputs string
45
+ inputs_show_user = inputs.replace(plugin_arr_enum_prompt, "")
46
+ inputs_show_user += plugin_arr_enum_prompt[:200] + '...'
47
+ inputs_show_user += '\n...\n'
48
+ inputs_show_user += '...\n'
49
+ inputs_show_user += '...}'
50
+ return inputs_show_user
51
+
52
+ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
53
+ plugin_arr_enum_prompt, plugin_arr_dict, plugin_arr_dict_parse = read_avail_plugin_enum()
54
+ class Plugin(BaseModel):
55
+ plugin_selection: str = Field(description="The most related plugin from one of the PluginEnum.", default="F_0000")
56
+ reason_of_selection: str = Field(description="The reason why you should select this plugin.", default="This plugin satisfy user requirement most")
57
+ # ⭐ ⭐ ⭐ 选择插件
58
+ yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n查找可用插件中...", chatbot=chatbot, history=history, delay=0)
59
+ gpt_json_io = GptJsonIO(Plugin)
60
+ gpt_json_io.format_instructions = "The format of your output should be a json that can be parsed by json.loads.\n"
61
+ gpt_json_io.format_instructions += """Output example: {"plugin_selection":"F_1234", "reason_of_selection":"F_1234 plugin satisfy user requirement most"}\n"""
62
+ gpt_json_io.format_instructions += "The plugins you are authorized to use are listed below:\n"
63
+ gpt_json_io.format_instructions += plugin_arr_enum_prompt
64
+ inputs = "Choose the correct plugin according to user requirements, the user requirement is: \n\n" + \
65
+ ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions
66
+
67
+ run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
68
+ inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
69
+ try:
70
+ gpt_reply = run_gpt_fn(inputs, "")
71
+ plugin_sel = gpt_json_io.generate_output_auto_repair(gpt_reply, run_gpt_fn)
72
+ except JsonStringError:
73
+ msg = f"抱歉, {llm_kwargs['llm_model']}无法理解您的需求。"
74
+ msg += "请求的Prompt为:\n" + wrap_code(get_inputs_show_user(inputs, plugin_arr_enum_prompt))
75
+ msg += "语言模型回复为:\n" + wrap_code(gpt_reply)
76
+ msg += "\n但您可以尝试再试一次\n"
77
+ yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
78
+ return
79
+ if plugin_sel.plugin_selection not in plugin_arr_dict_parse:
80
+ msg = f"抱歉, 找不到合适插件执行该任务, 或者{llm_kwargs['llm_model']}无法理解您的需求。"
81
+ msg += f"语言模型{llm_kwargs['llm_model']}选择了不存在的插件:\n" + wrap_code(gpt_reply)
82
+ msg += "\n但您可以尝试再试一次\n"
83
+ yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
84
+ return
85
+
86
+ # ⭐ ⭐ ⭐ 确认插件参数
87
+ if not have_any_recent_upload_files(chatbot):
88
+ appendix_info = ""
89
+ else:
90
+ appendix_info = get_recent_file_prompt_support(chatbot)
91
+
92
+ plugin = plugin_arr_dict_parse[plugin_sel.plugin_selection]
93
+ yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n提取插件参数...", chatbot=chatbot, history=history, delay=0)
94
+ class PluginExplicit(BaseModel):
95
+ plugin_selection: str = plugin_sel.plugin_selection
96
+ plugin_arg: str = Field(description="The argument of the plugin.", default="")
97
+ gpt_json_io = GptJsonIO(PluginExplicit)
98
+ gpt_json_io.format_instructions += "The information about this plugin is:" + plugin["Info"]
99
+ inputs = f"A plugin named {plugin_sel.plugin_selection} is selected, " + \
100
+ "you should extract plugin_arg from the user requirement, the user requirement is: \n\n" + \
101
+ ">> " + (txt + appendix_info).rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
102
+ gpt_json_io.format_instructions
103
+ run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
104
+ inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
105
+ plugin_sel = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
106
+
107
+
108
+ # ⭐ ⭐ ⭐ 执行插件
109
+ fn = plugin['Function']
110
+ fn_name = fn.__name__
111
+ msg = f'{llm_kwargs["llm_model"]}为您选择了插件: `{fn_name}`\n\n插件说明:{plugin["Info"]}\n\n插件参数:{plugin_sel.plugin_arg}\n\n假如偏离了您的要求,按停止键终止。'
112
+ yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
113
+ yield from fn(plugin_sel.plugin_arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, -1)
114
+ return
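`read_avail_plugin_enum` above gives every plugin a stable `F_%04d` key for the prompt and additionally accepts a short `F_%d` alias when parsing the model's choice. The key scheme over a toy plugin table:

```python
import json

plugins = {
    "TranslatePDF": {"Info": "Translate a PDF document"},
    "SearchWeb": {"Info": "Search the web"},
}

# Long keys go into the prompt; both long and short keys are accepted on parse.
plugin_info = {"F_{:04d}".format(i): v["Info"] for i, v in enumerate(plugins.values(), start=1)}
plugin_parse = {"F_{:04d}".format(i): v for i, v in enumerate(plugins.values(), start=1)}
plugin_parse.update({f"F_{i}": v for i, v in enumerate(plugins.values(), start=1)})

prompt = "PluginEnum=" + json.dumps(plugin_info, ensure_ascii=False, indent=2)
print("F_0001" in plugin_parse and "F_1" in plugin_parse)  # True
```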
crazy_functions/vt_fns/vt_modify_config.py ADDED
@@ -0,0 +1,81 @@
1
+ from pydantic import BaseModel, Field
2
+ from typing import List
3
+ from toolbox import update_ui_lastest_msg, get_conf
4
+ from request_llm.bridge_all import predict_no_ui_long_connection
5
+ from crazy_functions.json_fns.pydantic_io import GptJsonIO
6
+ import copy, json, pickle, os, sys
7
+
8
+
9
+ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
10
+ ALLOW_RESET_CONFIG, = get_conf('ALLOW_RESET_CONFIG')
11
+ if not ALLOW_RESET_CONFIG:
12
+ yield from update_ui_lastest_msg(
13
+ lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
14
+ chatbot=chatbot, history=history, delay=2
15
+ )
16
+ return
17
+
18
+ # ⭐ ⭐ ⭐ 读取可配置项目条目
19
+ names = {}
20
+ from enum import Enum
21
+ import config
22
+ for k, v in config.__dict__.items():
23
+ if k.startswith('__'): continue
24
+ names.update({k:k})
25
+ # if len(names) > 20: break # 限制最多前20个配置项,如果太多了会导致gpt无法理解
26
+
27
+ ConfigOptions = Enum('ConfigOptions', names)
28
+ class ModifyConfigurationIntention(BaseModel):
29
+ which_config_to_modify: ConfigOptions = Field(description="the name of the configuration to modify, you must choose from one of the ConfigOptions enum.", default=None)
30
+ new_option_value: str = Field(description="the new value of the option", default=None)
31
+
32
+ # ⭐ ⭐ ⭐ 分析用户意图
33
+ yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n读取新配置中", chatbot=chatbot, history=history, delay=0)
34
+ gpt_json_io = GptJsonIO(ModifyConfigurationIntention)
35
+ inputs = "Analyze how to change configuration according to following user input, answer me with json: \n\n" + \
36
+ ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
37
+ gpt_json_io.format_instructions
38
+
39
+ run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
40
+ inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
41
+ user_intention = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
42
+
43
+ explicit_conf = user_intention.which_config_to_modify.value
44
+
45
+ ok = (explicit_conf in txt)
46
+ if ok:
47
+ yield from update_ui_lastest_msg(
48
+ lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}",
49
+ chatbot=chatbot, history=history, delay=1
50
+ )
51
+ yield from update_ui_lastest_msg(
52
+ lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}\n\n正在修改配置中",
53
+ chatbot=chatbot, history=history, delay=2
54
+ )
55
+
56
+ # ⭐ ⭐ ⭐ 立即应用配置
57
+ from toolbox import set_conf
58
+ set_conf(explicit_conf, user_intention.new_option_value)
59
+
60
+ yield from update_ui_lastest_msg(
61
+ lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,重新载入页面即可生效。", chatbot=chatbot, history=history, delay=1
62
+ )
63
+ else:
64
+ yield from update_ui_lastest_msg(
65
+ lastmsg=f"失败,如果需要配置{explicit_conf},您需要明确说明并在指令中提到它。", chatbot=chatbot, history=history, delay=5
66
+ )
67
+
68
+ def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
69
+ ALLOW_RESET_CONFIG, = get_conf('ALLOW_RESET_CONFIG')
70
+ if not ALLOW_RESET_CONFIG:
71
+ yield from update_ui_lastest_msg(
72
+ lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
73
+ chatbot=chatbot, history=history, delay=2
74
+ )
75
+ return
76
+
77
+ yield from modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
78
+ yield from update_ui_lastest_msg(
79
+ lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,五秒后即将重启!若出现报错请无视即可。", chatbot=chatbot, history=history, delay=5
80
+ )
81
+ os.execl(sys.executable, sys.executable, *sys.argv)
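`modify_configuration_hot` builds an Enum of configuration names at runtime so the model is forced to pick a real option. The same trick over a stand-in config namespace (the pydantic layer is omitted here):

```python
from enum import Enum
from types import SimpleNamespace

config = SimpleNamespace(API_KEY="", LLM_MODEL="gpt-3.5-turbo", __hidden__="x")

# Collect public attribute names and build a string-valued Enum from them.
names = {k: k for k in vars(config) if not k.startswith("__")}
ConfigOptions = Enum("ConfigOptions", names)

print([m.name for m in ConfigOptions])     # ['API_KEY', 'LLM_MODEL']
print(ConfigOptions["LLM_MODEL"].value)    # LLM_MODEL
```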
crazy_functions/vt_fns/vt_state.py ADDED
@@ -0,0 +1,28 @@
1
+ import pickle
2
+
3
+ class VoidTerminalState():
4
+ def __init__(self):
5
+ self.reset_state()
6
+
7
+ def reset_state(self):
8
+ self.has_provided_explaination = False
9
+
10
+ def lock_plugin(self, chatbot):
11
+ chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'
12
+ chatbot._cookies['plugin_state'] = pickle.dumps(self)
13
+
14
+ def unlock_plugin(self, chatbot):
15
+ self.reset_state()
16
+ chatbot._cookies['lock_plugin'] = None
17
+ chatbot._cookies['plugin_state'] = pickle.dumps(self)
18
+
19
+ def set_state(self, chatbot, key, value):
20
+ setattr(self, key, value)
21
+ chatbot._cookies['plugin_state'] = pickle.dumps(self)
22
+
23
+ def get_state(chatbot):
24
+ state = chatbot._cookies.get('plugin_state', None)
25
+ if state is not None: state = pickle.loads(state)
26
+ else: state = VoidTerminalState()
27
+ state.chatbot = chatbot
28
+ return state
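`VoidTerminalState` round-trips plugin state through the cookie dict with pickle so it survives between calls. The same round-trip reduced to a plain dict standing in for `chatbot._cookies`:

```python
import pickle

def save_state(cookies, state):
    # Serialize the state into the cookie dict.
    cookies["plugin_state"] = pickle.dumps(state)

def load_state(cookies):
    # Restore the state, or start fresh when none was stored yet.
    blob = cookies.get("plugin_state")
    return pickle.loads(blob) if blob is not None else {"has_provided_explaination": False}

cookies = {}
state = load_state(cookies)                 # no state yet -> default
state["has_provided_explaination"] = True
save_state(cookies, state)
print(load_state(cookies))  # {'has_provided_explaination': True}
```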
crazy_functions/批量Markdown翻译.py CHANGED
@@ -145,6 +145,8 @@ def get_files_from_everything(txt, preference=''):
145
  project_folder = txt
146
  file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
147
  else:
 
 
148
  success = False
149
 
150
  return success, file_manifest, project_folder
 
145
  project_folder = txt
146
  file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
147
  else:
148
+ project_folder = None
149
+ file_manifest = []
150
  success = False
151
 
152
  return success, file_manifest, project_folder
crazy_functions/批量翻译PDF文档_NOUGAT.py ADDED
@@ -0,0 +1,271 @@
1
+ from toolbox import CatchException, report_execption, gen_time_str
2
+ from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
3
+ from toolbox import write_history_to_file, get_log_folder
4
+ from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
5
+ from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
6
+ from .crazy_utils import read_and_clean_pdf_text
7
+ from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url
8
+ from colorful import *
9
+ import os
10
+ import math
11
+ import logging
12
+
13
+ def markdown_to_dict(article_content):
14
+ import markdown
15
+ from bs4 import BeautifulSoup
16
+ cur_t = ""
17
+ cur_c = ""
18
+ results = {}
19
+ for line in article_content:
20
+ if line.startswith('#'):
21
+ if cur_t!="":
22
+ if cur_t not in results:
23
+ results.update({cur_t:cur_c.lstrip('\n')})
24
+ else:
25
+ # 处理重名的章节
26
+ results.update({cur_t + " " + gen_time_str():cur_c.lstrip('\n')})
27
+ cur_t = line.rstrip('\n')
28
+ cur_c = ""
29
+ else:
30
+ cur_c += line
31
+ if cur_t != "": results.update({cur_t: cur_c.lstrip('\n')}) # 循环结束后,写入最后一个章节
+ results_final = {}
32
+ for k in list(results.keys()):
33
+ if k.startswith('# '):
34
+ results_final['title'] = k.split('# ')[-1]
35
+ results_final['authors'] = results.pop(k).lstrip('\n')
36
+ if k.startswith('###### Abstract'):
37
+ results_final['abstract'] = results.pop(k).lstrip('\n')
38
+
39
+ results_final_sections = []
40
+ for k,v in results.items():
41
+ results_final_sections.append({
42
+ 'heading':k.lstrip("# "),
43
+ 'text':v if len(v) > 0 else f"The beginning of {k.lstrip('# ')} section."
44
+ })
45
+ results_final['sections'] = results_final_sections
46
+ return results_final
47
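`markdown_to_dict` folds Nougat's markdown into title/authors/abstract/sections by splitting on `#` headings. A compact version of that split loop on a toy document; this sketch also commits the final section once the loop ends:

```python
def split_by_heading(lines):
    # Accumulate body text under the most recent '#' heading.
    sections, cur_t, cur_c = {}, "", ""
    for line in lines:
        if line.startswith("#"):
            if cur_t:
                sections[cur_t] = cur_c.lstrip("\n")
            cur_t, cur_c = line.rstrip("\n"), ""
        else:
            cur_c += line
    if cur_t:  # commit the final section after the loop
        sections[cur_t] = cur_c.lstrip("\n")
    return sections

doc = ["# A Toy Paper\n", "Alice, Bob\n", "## Intro\n", "First line.\n", "Second line.\n"]
print(split_by_heading(doc))
```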
+
48
+
49
+ @CatchException
50
+ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
51
+
52
+ disable_auto_promotion(chatbot)
53
+ # 基本信息:功能、贡献者
54
+ chatbot.append([
55
+ "函数插件功能?",
56
+ "批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
57
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
58
+
59
+ # 尝试导入依赖,如果缺少依赖,则给出安装建议
60
+ try:
61
+ import nougat
62
+ import tiktoken
63
+ except:
64
+ report_execption(chatbot, history,
65
+ a=f"解析项目: {txt}",
66
+ b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade nougat-ocr tiktoken```。")
67
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
68
+ return
69
+
70
+ # 清空历史,以免输入溢出
71
+ history = []
72
+
73
+ from .crazy_utils import get_files_from_everything
74
+ success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
75
+ # 检测输入参数,如没有给定输入参数,直接退出
76
+ if not success:
77
+ if txt == "": txt = '空空如也的输入栏'
78
+
79
+ # 如果没找到任何文件
80
+ if len(file_manifest) == 0:
81
+ report_execption(chatbot, history,
82
+ a=f"解析项目: {txt}", b=f"找不到任何.pdf文件: {txt}")
83
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
84
+ return
85
+
86
+ # 开始正式执行任务
87
+ yield from 解析PDF_基于NOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
88
+
89
+
90
+ def nougat_with_timeout(command, cwd, timeout=3600):
91
+ import subprocess
92
+ process = subprocess.Popen(command, shell=True, cwd=cwd)
93
+ try:
94
+ stdout, stderr = process.communicate(timeout=timeout)
95
+ except subprocess.TimeoutExpired:
96
+ process.kill()
97
+ stdout, stderr = process.communicate()
98
+ print("Process timed out!")
99
+ return False
100
+ return True
101
+
102
+
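`nougat_with_timeout` above launches the command with `Popen` and kills it on `TimeoutExpired`. A sketch of the same True/False-on-timeout behavior, using the higher-level `subprocess.run` (which kills the child itself before re-raising), would be:

```python
import subprocess

def run_with_timeout(command, cwd=".", timeout=5):
    # Returns True if the command finishes within the limit,
    # False if it is killed on timeout (same contract as above).
    try:
        subprocess.run(command, shell=True, cwd=cwd, timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        return False
```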
103
+ def NOUGAT_parse_pdf(fp):
104
+ import glob
105
+ from toolbox import get_log_folder, gen_time_str
106
+ dst = os.path.join(get_log_folder(plugin_name='nougat'), gen_time_str())
107
+ os.makedirs(dst)
108
+ nougat_with_timeout(f'nougat --out "{os.path.abspath(dst)}" "{os.path.abspath(fp)}"', os.getcwd())
109
+ res = glob.glob(os.path.join(dst,'*.mmd'))
110
+ if len(res) == 0:
111
+ raise RuntimeError("Nougat解析论文失败。")
112
+ return res[0]
113
+
114
+
115
+ def 解析PDF_基于NOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
116
+ import copy
117
+ import tiktoken
118
+ TOKEN_LIMIT_PER_FRAGMENT = 1280
119
+ generated_conclusion_files = []
120
+ generated_html_files = []
121
+ DST_LANG = "中文"
122
+ for index, fp in enumerate(file_manifest):
123
+ chatbot.append(["当前进度:", f"正在解析论文,请稍候。(第一次运行时,需要花费较长时间下载NOUGAT参数)"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
124
+ fpp = NOUGAT_parse_pdf(fp)
125
+
126
+ with open(fpp, 'r', encoding='utf8') as f:
127
+ article_content = f.readlines()
128
+ article_dict = markdown_to_dict(article_content)
129
+ logging.info(article_dict)
130
+
131
+ prompt = "以下是一篇学术论文的基本信息:\n"
132
+ # title
133
+ title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'
134
+ # authors
135
+ authors = article_dict.get('authors', '无法获取 authors'); prompt += f'authors:{authors}\n\n'
136
+ # abstract
137
+ abstract = article_dict.get('abstract', '无法获取 abstract'); prompt += f'abstract:{abstract}\n\n'
138
+ # command
139
+ prompt += f"请将题目和摘要翻译为{DST_LANG}。"
140
+ meta = [f'# Title:\n\n', title, f'# Abstract:\n\n', abstract ]
141
+
142
+ # 单线,获取文章meta信息
143
+ paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
144
+ inputs=prompt,
145
+ inputs_show_user=prompt,
146
+ llm_kwargs=llm_kwargs,
147
+ chatbot=chatbot, history=[],
148
+ sys_prompt="You are an academic paper reader.",
149
+ )
150
+
151
+ # 多线,翻译
152
+ inputs_array = []
153
+ inputs_show_user_array = []
154
+
155
+ # get_token_num
156
+ from request_llm.bridge_all import model_info
157
+ enc = model_info[llm_kwargs['llm_model']]['tokenizer']
158
+ def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
159
+ from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
160
+
161
+ def break_down(txt):
162
+ raw_token_num = get_token_num(txt)
163
+ if raw_token_num <= TOKEN_LIMIT_PER_FRAGMENT:
164
+ return [txt]
165
+ else:
166
+ # raw_token_num > TOKEN_LIMIT_PER_FRAGMENT
167
+ # find a smooth token limit to achieve even separation
168
+ count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
169
+ token_limit_smooth = raw_token_num // count + count
170
+ return breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn=get_token_num, limit=token_limit_smooth)
171
+
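The `break_down` helper above picks a "smooth" per-fragment limit so a text just over `TOKEN_LIMIT_PER_FRAGMENT` is split into roughly equal pieces rather than one full fragment plus a tiny remainder. Isolated as a sketch:

```python
import math

TOKEN_LIMIT_PER_FRAGMENT = 1280

def smooth_limit(raw_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT):
    # Spread tokens evenly across ceil(raw/limit) fragments; the
    # "+ count" slack keeps every fragment under the derived limit.
    count = int(math.ceil(raw_token_num / limit))
    return raw_token_num // count + count
```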
172
+ for section in article_dict.get('sections'):
173
+ if len(section['text']) == 0: continue
174
+ section_frags = break_down(section['text'])
175
+ for i, fragment in enumerate(section_frags):
176
+ heading = section['heading']
177
+ if len(section_frags) > 1: heading += f' Part-{i+1}'
178
+ inputs_array.append(
179
+ f"你需要翻译{heading}章节,内容如下: \n\n{fragment}"
180
+ )
181
+ inputs_show_user_array.append(
182
+ f"# {heading}\n\n{fragment}"
183
+ )
184
+
185
+ gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
186
+ inputs_array=inputs_array,
187
+ inputs_show_user_array=inputs_show_user_array,
188
+ llm_kwargs=llm_kwargs,
189
+ chatbot=chatbot,
190
+ history_array=[meta for _ in inputs_array],
191
+ sys_prompt_array=[
192
+ "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in inputs_array],
193
+ )
194
+ res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + gpt_response_collection, file_basename=None, file_fullname=None)
195
+ promote_file_to_downloadzone(res_path, rename_file=os.path.basename(fp)+'.md', chatbot=chatbot)
196
+ generated_conclusion_files.append(res_path)
197
+
198
+ ch = construct_html()
199
+ orig = ""
200
+ trans = ""
201
+ gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
202
+ for i,k in enumerate(gpt_response_collection_html):
203
+ if i%2==0:
204
+ gpt_response_collection_html[i] = inputs_show_user_array[i//2]
205
+ else:
206
+ gpt_response_collection_html[i] = gpt_response_collection_html[i]
207
+
208
+ final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""]
209
+ final.extend(gpt_response_collection_html)
210
+ for i, k in enumerate(final):
211
+ if i%2==0:
212
+ orig = k
213
+ if i%2==1:
214
+ trans = k
215
+ ch.add_row(a=orig, b=trans)
216
+ create_report_file_name = f"{os.path.basename(fp)}.trans.html"
217
+ html_file = ch.save_file(create_report_file_name)
218
+ generated_html_files.append(html_file)
219
+ promote_file_to_downloadzone(html_file, rename_file=os.path.basename(html_file), chatbot=chatbot)
220
+
221
+ chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
222
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
223
+
224
+
225
+
226
+ class construct_html():
227
+ def __init__(self) -> None:
228
+ self.css = """
229
+ .row {
230
+ display: flex;
231
+ flex-wrap: wrap;
232
+ }
233
+
234
+ .column {
235
+ flex: 1;
236
+ padding: 10px;
237
+ }
238
+
239
+ .table-header {
240
+ font-weight: bold;
241
+ border-bottom: 1px solid black;
242
+ }
243
+
244
+ .table-row {
245
+ border-bottom: 1px solid lightgray;
246
+ }
247
+
248
+ .table-cell {
249
+ padding: 5px;
250
+ }
251
+ """
252
+ self.html_string = f'<!DOCTYPE html><head><meta charset="utf-8"><title>翻译结果</title><style>{self.css}</style></head>'
253
+
254
+
255
+ def add_row(self, a, b):
256
+ tmp = """
257
+ <div class="row table-row">
258
+ <div class="column table-cell">REPLACE_A</div>
259
+ <div class="column table-cell">REPLACE_B</div>
260
+ </div>
261
+ """
262
+ from toolbox import markdown_convertion
263
+ tmp = tmp.replace('REPLACE_A', markdown_convertion(a))
264
+ tmp = tmp.replace('REPLACE_B', markdown_convertion(b))
265
+ self.html_string += tmp
266
+
267
+
268
+ def save_file(self, file_name):
269
+ with open(os.path.join(get_log_folder(), file_name), 'w', encoding='utf8') as f:
270
+ f.write(self.html_string.encode('utf-8', 'ignore').decode())
271
+ return os.path.join(get_log_folder(), file_name)
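The `construct_html` class above accumulates two-column flex rows and writes one HTML file. A dependency-free sketch of the same row-collection pattern (without the `toolbox.markdown_convertion` call, which converts each cell from markdown):

```python
class SimpleHtmlTable:
    # Hypothetical minimal sketch of construct_html: collect (a, b)
    # pairs and render them as the two-column rows styled above.
    def __init__(self):
        self.rows = []

    def add_row(self, a, b):
        self.rows.append(
            '<div class="row table-row">'
            f'<div class="column table-cell">{a}</div>'
            f'<div class="column table-cell">{b}</div></div>'
        )

    def render(self):
        return '<!DOCTYPE html><html><body>' + ''.join(self.rows) + '</body></html>'
```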
crazy_functions/批量翻译PDF文档_多线程.py CHANGED
@@ -24,10 +24,11 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
24
  try:
25
  import fitz
26
  import tiktoken
 
27
  except:
28
  report_execption(chatbot, history,
29
  a=f"解析项目: {txt}",
30
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken```。")
31
  yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
32
  return
33
 
@@ -58,7 +59,6 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
58
 
59
  def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
60
  import copy
61
- import tiktoken
62
  TOKEN_LIMIT_PER_FRAGMENT = 1280
63
  generated_conclusion_files = []
64
  generated_html_files = []
@@ -66,7 +66,7 @@ def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwa
66
  for index, fp in enumerate(file_manifest):
67
  chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
68
  article_dict = parse_pdf(fp, grobid_url)
69
- print(article_dict)
70
  prompt = "以下是一篇学术论文的基本信息:\n"
71
  # title
72
  title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'
@@ -113,7 +113,7 @@ def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwa
113
  section_frags = break_down(section['text'])
114
  for i, fragment in enumerate(section_frags):
115
  heading = section['heading']
116
- if len(section_frags) > 1: heading += f'Part-{i+1}'
117
  inputs_array.append(
118
  f"你需要翻译{heading}章节,内容如下: \n\n{fragment}"
119
  )
 
24
  try:
25
  import fitz
26
  import tiktoken
27
+ import scipdf
28
  except:
29
  report_execption(chatbot, history,
30
  a=f"解析项目: {txt}",
31
+ b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken scipdf_parser```。")
32
  yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
33
  return
34
 
 
59
 
60
  def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
61
  import copy
 
62
  TOKEN_LIMIT_PER_FRAGMENT = 1280
63
  generated_conclusion_files = []
64
  generated_html_files = []
 
66
  for index, fp in enumerate(file_manifest):
67
  chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
68
  article_dict = parse_pdf(fp, grobid_url)
69
+ if article_dict is None: raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
70
  prompt = "以下是一篇学术论文的基本信息:\n"
71
  # title
72
  title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'
 
113
  section_frags = break_down(section['text'])
114
  for i, fragment in enumerate(section_frags):
115
  heading = section['heading']
116
+ if len(section_frags) > 1: heading += f' Part-{i+1}'
117
  inputs_array.append(
118
  f"你需要翻译{heading}章节,内容如下: \n\n{fragment}"
119
  )
crazy_functions/联网的ChatGPT.py CHANGED
@@ -75,7 +75,11 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
75
  proxies, = get_conf('proxies')
76
  urls = google(txt, proxies)
77
  history = []
78
-
  # ------------- < 第2步:依次访问网页 > -------------
80
  max_search_result = 5 # 最多收纳多少个网页的结果
81
  for index, url in enumerate(urls[:max_search_result]):
 
75
  proxies, = get_conf('proxies')
76
  urls = google(txt, proxies)
77
  history = []
78
+ if len(urls) == 0:
79
+ chatbot.append((f"结论:{txt}",
80
+ "[Local Message] 受到google限制,无法从google获取信息!"))
81
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
82
+ return
83
  # ------------- < 第2步:依次访问网页 > -------------
84
  max_search_result = 5 # 最多收纳多少个网页的结果
85
  for index, url in enumerate(urls[:max_search_result]):
crazy_functions/联网的ChatGPT_bing版.py CHANGED
@@ -75,7 +75,11 @@ def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, histor
75
  proxies, = get_conf('proxies')
76
  urls = bing_search(txt, proxies)
77
  history = []
78
-
79
  # ------------- < 第2步:依次访问网页 > -------------
80
  max_search_result = 8 # 最多收纳多少个网页的结果
81
  for index, url in enumerate(urls[:max_search_result]):
 
75
  proxies, = get_conf('proxies')
76
  urls = bing_search(txt, proxies)
77
  history = []
78
+ if len(urls) == 0:
79
+ chatbot.append((f"结论:{txt}",
80
+ "[Local Message] 受到bing限制,无法从bing获取信息!"))
81
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
82
+ return
83
  # ------------- < 第2步:依次访问网页 > -------------
84
  max_search_result = 8 # 最多收纳多少个网页的结果
85
  for index, url in enumerate(urls[:max_search_result]):
crazy_functions/虚空终端.py CHANGED
@@ -1,119 +1,179 @@
1
  from toolbox import CatchException, update_ui, gen_time_str
2
- from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
3
- from .crazy_utils import input_clipping
4
- import copy, json
5
-
6
- def get_fn_lib():
7
- return {
8
- "BatchTranslatePDFDocuments_MultiThreaded": {
9
- "module": "crazy_functions.批量翻译PDF文档_多线程",
10
- "function": "批量翻译PDF文档",
11
- "description": "Translate PDF Documents",
12
- "arg_1_description": "A path containing pdf files.",
13
- },
14
- "SummarizingWordDocuments": {
15
- "module": "crazy_functions.总结word文档",
16
- "function": "总结word文档",
17
- "description": "Summarize Word Documents",
18
- "arg_1_description": "A path containing Word files.",
19
- },
20
- "ImageGeneration": {
21
- "module": "crazy_functions.图片生成",
22
- "function": "图片生成",
23
- "description": "Generate a image that satisfies some description.",
24
- "arg_1_description": "Descriptions about the image to be generated.",
25
- },
26
- "TranslateMarkdownFromEnglishToChinese": {
27
- "module": "crazy_functions.批量Markdown翻译",
28
- "function": "Markdown中译英",
29
- "description": "Translate Markdown Documents from English to Chinese.",
30
- "arg_1_description": "A path containing Markdown files.",
31
- },
32
- "SummaryAudioVideo": {
33
- "module": "crazy_functions.总结音视频",
34
- "function": "总结音视频",
35
- "description": "Get text from a piece of audio and summarize this audio.",
36
- "arg_1_description": "A path containing audio files.",
37
- },
38
- }
39
-
40
- functions = [
41
- {
42
- "name": k,
43
- "description": v['description'],
44
- "parameters": {
45
- "type": "object",
46
- "properties": {
47
- "plugin_arg_1": {
48
- "type": "string",
49
- "description": v['arg_1_description'],
50
- },
51
- },
52
- "required": ["plugin_arg_1"],
53
- },
54
- } for k, v in get_fn_lib().items()
55
- ]
56
-
57
- def inspect_dependency(chatbot, history):
58
- return True
59
-
60
- def eval_code(code, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
61
- import importlib
62
- try:
63
- tmp = get_fn_lib()[code['name']]
64
- fp, fn = tmp['module'], tmp['function']
65
- fn_plugin = getattr(importlib.import_module(fp, fn), fn)
66
- arg = json.loads(code['arguments'])['plugin_arg_1']
67
- yield from fn_plugin(arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
68
- except:
69
- from toolbox import trimmed_format_exc
70
- chatbot.append(["执行错误", f"\n```\n{trimmed_format_exc()}\n```\n"])
71
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
72
-
73
- def get_code_block(reply):
74
- import re
75
- pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
76
- matches = re.findall(pattern, reply) # find all code blocks in text
77
- if len(matches) != 1:
78
- raise RuntimeError("GPT is not generating proper code.")
79
- return matches[0].strip('python') # code block
80
 
81
- @CatchException
82
- def 终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
83
- """
84
- txt 输入栏用户输入的文本, 例如需要翻译的一段话, 再例如一个包含了待处理文件的路径
85
- llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
86
- plugin_kwargs 插件模型的参数, 暂时没有用武之地
87
- chatbot 聊天显示框的句柄, 用于显示给用户
88
- history 聊天历史, 前情提要
89
- system_prompt 给gpt的静默提醒
90
- web_port 当前软件运行的端口号
91
- """
92
- # 清空历史, 以免输入溢出
93
- history = []
94
-
95
- # 基本信息:功能、贡献者
96
- chatbot.append(["虚空终端插件的功能?", "根据自然语言的描述, 执行任意插件的命令."])
97
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
98
-
99
- # 输入
100
- i_say = txt
101
- # 开始
102
- llm_kwargs_function_call = copy.deepcopy(llm_kwargs)
103
- llm_kwargs_function_call['llm_model'] = 'gpt-call-fn' # 修改调用函数
104
  gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
105
- inputs=i_say, inputs_show_user=txt,
106
- llm_kwargs=llm_kwargs_function_call, chatbot=chatbot, history=[],
107
- sys_prompt=functions
108
  )
 
 
109
 
110
- # 将代码转为动画
111
- res = json.loads(gpt_say)['choices'][0]
112
- if res['finish_reason'] == 'function_call':
113
- code = json.loads(gpt_say)['choices'][0]
114
115
  else:
116
- chatbot.append(["无法调用相关功能", res])
117
118
 
 
119
 
 
1
+ """
2
+ Explanation of the Void Terminal Plugin:
3
+
4
+ Please describe in natural language what you want to do.
5
+
6
+ 1. You can open the plugin's dropdown menu to explore various capabilities of this project, and then describe your needs in natural language, for example:
7
+ - "Please call the plugin to translate a PDF paper for me. I just uploaded the paper to the upload area."
8
+ - "Please use the plugin to translate a PDF paper, with the address being https://www.nature.com/articles/s41586-019-1724-z.pdf."
9
+ - "Generate an image with blooming flowers and lush green grass using the plugin."
10
+ - "Translate the README using the plugin. The GitHub URL is https://github.com/facebookresearch/co-tracker."
11
+ - "Translate an Arxiv paper for me. The Arxiv ID is 1812.10695. Remember to use the plugin and don't do it manually!"
12
+ - "I don't like the current interface color. Modify the configuration and change the theme to THEME="High-Contrast"."
13
+ - "Could you please explain the structure of the Transformer network?"
14
+
15
+ 2. If you use keywords like "call the plugin xxx", "modify the configuration xxx", "please", etc., your intention can be recognized more accurately.
16
+
17
+ 3. Your intention can be recognized more accurately when using powerful models like GPT4. This plugin is relatively new, so please feel free to provide feedback on GitHub.
18
+
19
+ 4. Now, if you need to process a file, please upload the file (drag the file to the file upload area) or describe the path to the file.
20
+
21
+ 5. If you don't need to upload a file, you can simply repeat your command again.
22
+ """
23
+ explain_msg = """
24
+ ## 虚空终端插件说明:
25
+
26
+ 1. 请用**自然语言**描述您需要做什么。例如:
27
+ - 「请调用插件,为我翻译PDF论文,论文我刚刚放到上传区了」
28
+ - 「请调用插件翻译PDF论文,地址为https://aaa/bbb/ccc.pdf」
29
+ - 「把Arxiv论文翻译成中文PDF,arxiv论文的ID是1812.10695,记得用插件!」
30
+ - 「生成一张图片,图中鲜花怒放,绿草如茵,用插件实现」
31
+ - 「用插件翻译README,Github网址是https://github.com/facebookresearch/co-tracker」
32
+ - 「我不喜欢当前的界面颜色,修改配置,把主题THEME更换为THEME="High-Contrast"」
33
+ - 「请问Transformer网络的结构是怎样的?」
34
+
35
+ 2. 您可以打开插件下拉菜单以了解本项目的各种能力。
36
+
37
+ 3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词,您的意图可以被识别的更准确。
38
+
39
+ 4. 建议使用 GPT3.5 或更强的模型,弱模型可能无法理解您的想法。该插件诞生时间不长,欢迎您前往Github反馈问题。
40
+
41
+ 5. 现在,如果需要处理文件,请您上传文件(将文件拖动到文件上传区),或者描述文件所在的路径。
42
+
43
+ 6. 如果不需要上传文件,现在您只需要再次重复一次您的指令即可。
44
+ """
45
+
46
+ from pydantic import BaseModel, Field
47
+ from typing import List
48
  from toolbox import CatchException, update_ui, gen_time_str
49
+ from toolbox import update_ui_lastest_msg, disable_auto_promotion
50
+ from request_llm.bridge_all import predict_no_ui_long_connection
51
+ from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
52
+ from crazy_functions.crazy_utils import input_clipping
53
+ from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
54
+ from crazy_functions.vt_fns.vt_state import VoidTerminalState
55
+ from crazy_functions.vt_fns.vt_modify_config import modify_configuration_hot
56
+ from crazy_functions.vt_fns.vt_modify_config import modify_configuration_reboot
57
+ from crazy_functions.vt_fns.vt_call_plugin import execute_plugin
58
 
59
+ class UserIntention(BaseModel):
60
+ user_prompt: str = Field(description="the content of user input", default="")
61
+ intention_type: str = Field(description="the type of user intention, choose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']", default="ExecutePlugin")
62
+ user_provide_file: bool = Field(description="whether the user provides a path to a file", default=False)
63
+ user_provide_url: bool = Field(description="whether the user provides a url", default=False)
64
+
65
+
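The `UserIntention` model above uses pydantic `Field` defaults so every attribute is optional. As a hedged, stdlib-only sketch of the same schema (dataclasses instead of pydantic, so it omits the per-field descriptions the GPT JSON I/O relies on):

```python
from dataclasses import dataclass

@dataclass
class UserIntentionSketch:
    # Mirrors the pydantic model above; defaults make every field optional.
    user_prompt: str = ""
    intention_type: str = "ExecutePlugin"  # 'ModifyConfiguration' | 'ExecutePlugin' | 'Chat'
    user_provide_file: bool = False
    user_provide_url: bool = False
```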
66
+ def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
67
  gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
68
+ inputs=txt, inputs_show_user=txt,
69
+ llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
70
+ sys_prompt=system_prompt
71
  )
72
+ chatbot[-1] = [txt, gpt_say]
73
+ history.extend([txt, gpt_say])
74
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
75
+ pass
76
+
77
+
78
+ explain_intention_to_user = {
79
+ 'Chat': "聊天对话",
80
+ 'ExecutePlugin': "调用插件",
81
+ 'ModifyConfiguration': "修改配置",
82
+ }
83
 
84
+
85
+ def analyze_intention_with_simple_rules(txt):
86
+ user_intention = UserIntention()
87
+ user_intention.user_prompt = txt
88
+ is_certain = False
89
+
90
+ if '请问' in txt:
91
+ is_certain = True
92
+ user_intention.intention_type = 'Chat'
93
+
94
+ if '用插件' in txt:
95
+ is_certain = True
96
+ user_intention.intention_type = 'ExecutePlugin'
97
+
98
+ if '修改配置' in txt:
99
+ is_certain = True
100
+ user_intention.intention_type = 'ModifyConfiguration'
101
+
102
+ return is_certain, user_intention
103
+
104
+
105
+ @CatchException
106
+ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
107
+ disable_auto_promotion(chatbot=chatbot)
108
+ # 获取当前虚空终端状态
109
+ state = VoidTerminalState.get_state(chatbot)
110
+ appendix_msg = ""
111
+
112
+ # 用简单的关键词检测用户意图
113
+ is_certain, _ = analyze_intention_with_simple_rules(txt)
114
+ if txt.startswith('private_upload/') and len(txt) == 34:
115
+ state.set_state(chatbot=chatbot, key='has_provided_explaination', value=False)
116
+ appendix_msg = "\n\n**很好,您已经上传了文件**,现在请您描述您的需求。"
117
+
118
+ if is_certain or (state.has_provided_explaination):
119
+ # 如果意图明确,跳过提示环节
120
+ state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
121
+ state.unlock_plugin(chatbot=chatbot)
122
+ yield from update_ui(chatbot=chatbot, history=history)
123
+ yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
124
+ return
125
  else:
126
+ # 如果意图模糊,提示
127
+ state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
128
+ state.lock_plugin(chatbot=chatbot)
129
+ chatbot.append(("虚空终端状态:", explain_msg+appendix_msg))
130
+ yield from update_ui(chatbot=chatbot, history=history)
131
+ return
132
+
133
+
134
+
135
+ def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
136
+ history = []
137
+ chatbot.append(("虚空终端状态: ", f"正在执行任务: {txt}"))
138
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
139
+
140
+ # ⭐ ⭐ ⭐ 分析用户意图
141
+ is_certain, user_intention = analyze_intention_with_simple_rules(txt)
142
+ if not is_certain:
143
+ yield from update_ui_lastest_msg(
144
+ lastmsg=f"正在执行任务: {txt}\n\n分析用户意图中", chatbot=chatbot, history=history, delay=0)
145
+ gpt_json_io = GptJsonIO(UserIntention)
146
+ rf_req = "\nchoose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']"
147
+ inputs = "Analyze the intention of the user according to following user input: \n\n" + \
148
+ ">> " + (txt+rf_req).rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions
149
+ run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
150
+ inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
151
+ analyze_res = run_gpt_fn(inputs, "")
152
+ try:
153
+ user_intention = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
154
155
+ except JsonStringError as e:
156
+ yield from update_ui_lastest_msg(
157
+ lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 失败 当前语言模型({llm_kwargs['llm_model']})不能理解您的意图", chatbot=chatbot, history=history, delay=0)
158
+ return
159
+ else:
160
+ pass
161
+
162
+ yield from update_ui_lastest_msg(
163
+ lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
164
+ chatbot=chatbot, history=history, delay=0)
165
+
166
+ # 用户意图: 修改本项目的配置
167
+ if user_intention.intention_type == 'ModifyConfiguration':
168
+ yield from modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
169
+
170
+ # 用户意图: 调度插件
171
+ if user_intention.intention_type == 'ExecutePlugin':
172
+ yield from execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
173
+
174
+ # 用户意图: 聊天
175
+ if user_intention.intention_type == 'Chat':
176
+ yield from chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
177
 
178
+ return
179
 
crazy_functions/语音助手.py CHANGED
@@ -80,9 +80,9 @@ class InterviewAssistant(AliyunASR):
80
  def __init__(self):
81
  self.capture_interval = 0.5 # second
82
  self.stop = False
83
- self.parsed_text = ""
84
- self.parsed_sentence = ""
85
- self.buffered_sentence = ""
86
  self.event_on_result_chg = threading.Event()
87
  self.event_on_entence_end = threading.Event()
88
  self.event_on_commit_question = threading.Event()
@@ -132,7 +132,7 @@ class InterviewAssistant(AliyunASR):
132
  self.plugin_wd.feed()
133
 
134
  if self.event_on_result_chg.is_set():
135
- # update audio decode result
136
  self.event_on_result_chg.clear()
137
  chatbot[-1] = list(chatbot[-1])
138
  chatbot[-1][0] = self.buffered_sentence + self.parsed_text
@@ -144,7 +144,11 @@ class InterviewAssistant(AliyunASR):
144
  # called when a sentence has ended
145
  self.event_on_entence_end.clear()
146
  self.parsed_text = self.parsed_sentence
147
- self.buffered_sentence += self.parsed_sentence
 
 
 
 
148
 
149
  if self.event_on_commit_question.is_set():
150
  # called when a question should be commited
 
80
  def __init__(self):
81
  self.capture_interval = 0.5 # second
82
  self.stop = False
83
+ self.parsed_text = "" # 下个句子中已经说完的部分, 由 test_on_result_chg() 写入
84
+ self.parsed_sentence = "" # 某段话的整个句子,由 test_on_sentence_end() 写入
85
+ self.buffered_sentence = "" #
86
  self.event_on_result_chg = threading.Event()
87
  self.event_on_entence_end = threading.Event()
88
  self.event_on_commit_question = threading.Event()
 
132
  self.plugin_wd.feed()
133
 
134
  if self.event_on_result_chg.is_set():
135
+ # called when some words have finished
136
  self.event_on_result_chg.clear()
137
  chatbot[-1] = list(chatbot[-1])
138
  chatbot[-1][0] = self.buffered_sentence + self.parsed_text
 
144
  # called when a sentence has ended
145
  self.event_on_entence_end.clear()
146
  self.parsed_text = self.parsed_sentence
147
+ self.buffered_sentence += self.parsed_text
148
+ chatbot[-1] = list(chatbot[-1])
149
+ chatbot[-1][0] = self.buffered_sentence
150
+ history = chatbot2history(chatbot)
151
+ yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
152
 
153
  if self.event_on_commit_question.is_set():
154
  # called when a question should be commited
crazy_functions/谷歌检索小助手.py CHANGED
@@ -1,26 +1,75 @@
1
  from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
2
- from toolbox import CatchException, report_execption, write_results_to_file
3
- from toolbox import update_ui
4
 
5
  def get_meta_information(url, chatbot, history):
6
- import requests
7
  import arxiv
8
  import difflib
 
9
  from bs4 import BeautifulSoup
10
  from toolbox import get_conf
 
 
 
11
  proxies, = get_conf('proxies')
12
  headers = {
13
- 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36',
 
 
 
 
 
14
  }
15
- # 发送 GET 请求
16
- response = requests.get(url, proxies=proxies, headers=headers)
17
 
 
18
  # 解析网页内容
19
  soup = BeautifulSoup(response.text, "html.parser")
20
 
21
  def string_similar(s1, s2):
22
  return difflib.SequenceMatcher(None, s1, s2).quick_ratio()
23
24
  profile = []
25
  # 获取所有文章的标题和作者
26
  for result in soup.select(".gs_ri"):
@@ -31,32 +80,45 @@ def get_meta_information(url, chatbot, history):
31
  except:
32
  citation = 'cited by 0'
33
  abstract = result.select_one(".gs_rs").text.strip() # 摘要在 .gs_rs 中的文本,需要清除首尾空格
 
 
34
  search = arxiv.Search(
35
  query = title,
36
  max_results = 1,
37
  sort_by = arxiv.SortCriterion.Relevance,
38
  )
39
- try:
40
- paper = next(search.results())
41
- if string_similar(title, paper.title) > 0.90: # same paper
42
- abstract = paper.summary.replace('\n', ' ')
43
- is_paper_in_arxiv = True
44
- else: # different paper
45
- abstract = abstract
46
- is_paper_in_arxiv = False
47
- paper = next(search.results())
48
- except:
49
  abstract = abstract
50
  is_paper_in_arxiv = False
51
- print(title)
52
- print(author)
53
- print(citation)
 
 
54
  profile.append({
55
- 'title':title,
56
- 'author':author,
57
- 'citation':citation,
58
- 'abstract':abstract,
59
- 'is_paper_in_arxiv':is_paper_in_arxiv,
60
  })
61
 
62
  chatbot[-1] = [chatbot[-1][0], title + f'\n\n是否在arxiv中(不在arxiv中无法获取完整摘要):{is_paper_in_arxiv}\n\n' + abstract]
@@ -65,6 +127,7 @@ def get_meta_information(url, chatbot, history):
65
 
66
  @CatchException
67
  def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
 
68
  # 基本信息:功能、贡献者
69
  chatbot.append([
70
  "函数插件功能?",
@@ -86,6 +149,9 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
86
  # 清空历史,以免输入溢出
87
  history = []
88
  meta_paper_info_list = yield from get_meta_information(txt, chatbot, history)
 
 
 
89
  batchsize = 5
90
  for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)):
91
  if len(meta_paper_info_list[:batchsize]) > 0:
@@ -107,6 +173,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
107
  "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
108
  msg = '正常'
109
  yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
110
- res = write_results_to_file(history)
111
- chatbot.append(("完成了吗?", res));
 
112
  yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
 
1
  from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
2
+ from toolbox import CatchException, report_execption, promote_file_to_downloadzone
3
+ from toolbox import update_ui, update_ui_lastest_msg, disable_auto_promotion, write_history_to_file
4
+ import logging
5
+ import requests
6
+ import time
7
+ import random
8
+
9
+ ENABLE_ALL_VERSION_SEARCH = True
10
 
11
  def get_meta_information(url, chatbot, history):
 
12
  import arxiv
13
  import difflib
14
+ import re
15
  from bs4 import BeautifulSoup
16
  from toolbox import get_conf
17
+ from urllib.parse import urlparse
18
+ session = requests.session()
19
+
20
  proxies, = get_conf('proxies')
21
  headers = {
22
+ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36',
23
+ 'Accept-Encoding': 'gzip, deflate, br',
24
+ 'Accept-Language': 'en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7',
25
+ 'Cache-Control':'max-age=0',
26
+ 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
27
+ 'Connection': 'keep-alive'
28
  }
29
+ session.proxies.update(proxies)
30
+ session.headers.update(headers)
31
 
32
+ response = session.get(url)
33
  # 解析网页内容
34
  soup = BeautifulSoup(response.text, "html.parser")
35
 
36
  def string_similar(s1, s2):
37
  return difflib.SequenceMatcher(None, s1, s2).quick_ratio()
38
 
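`string_similar` above is what the plugin uses to decide whether a Google Scholar hit and an arxiv result are the same paper (threshold `> 0.90`). Reproduced as a standalone sketch; note `quick_ratio()` is only an upper bound on `SequenceMatcher.ratio()`:

```python
import difflib

def string_similar(s1, s2):
    # quick_ratio() upper-bounds ratio() using character counts only;
    # the plugin treats a score above 0.90 as "same paper".
    return difflib.SequenceMatcher(None, s1, s2).quick_ratio()
```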
39
+ if ENABLE_ALL_VERSION_SEARCH:
40
+ def search_all_version(url):
41
+ time.sleep(random.randint(1,5)) # 睡一会防止触发google反爬虫
42
+ response = session.get(url)
43
+ soup = BeautifulSoup(response.text, "html.parser")
44
+
45
+ for result in soup.select(".gs_ri"):
46
+ try:
47
+ url = result.select_one(".gs_rt").a['href']
48
+ except:
49
+ continue
50
+ arxiv_id = extract_arxiv_id(url)
51
+ if not arxiv_id:
52
+ continue
53
+ search = arxiv.Search(
54
+ id_list=[arxiv_id],
55
+ max_results=1,
56
+ sort_by=arxiv.SortCriterion.Relevance,
57
+ )
58
+ try: paper = next(search.results())
59
+ except: paper = None
60
+ return paper
61
+
62
+ return None
63
+
64
+ def extract_arxiv_id(url):
65
+ # 返回给定的url解析出的arxiv_id,如url未成功匹配返回None
66
+ pattern = r'arxiv.org/abs/([^/]+)'
67
+ match = re.search(pattern, url)
68
+ if match:
69
+ return match.group(1)
70
+ else:
71
+ return None
72
+
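`extract_arxiv_id` above pulls the paper id out of an `arxiv.org/abs/` URL with a single regex. Reproduced as a standalone sketch for clarity:

```python
import re

def extract_arxiv_id(url):
    # Same pattern as above: capture everything after "arxiv.org/abs/"
    # up to the next slash; return None when the URL does not match.
    match = re.search(r'arxiv.org/abs/([^/]+)', url)
    return match.group(1) if match else None
```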
73
  profile = []
74
  # 获取所有文章的标题和作者
75
  for result in soup.select(".gs_ri"):
 
80
  except:
81
  citation = 'cited by 0'
82
  abstract = result.select_one(".gs_rs").text.strip() # 摘要在 .gs_rs 中的文本,需要清除首尾空格
83
+
84
+ # 首先在arxiv上搜索,获取文章摘要
85
  search = arxiv.Search(
86
  query = title,
87
  max_results = 1,
88
  sort_by = arxiv.SortCriterion.Relevance,
89
  )
90
+ try: paper = next(search.results())
91
+ except: paper = None
92
+
93
+ is_match = paper is not None and string_similar(title, paper.title) > 0.90
94
+
95
+ # 如果在Arxiv上匹配失败,检索文章的历史版本的题目
96
+ if not is_match and ENABLE_ALL_VERSION_SEARCH:
97
+ other_versions_page_url = [tag['href'] for tag in result.select_one('.gs_flb').select('.gs_nph') if 'cluster' in tag['href']]
98
+ if len(other_versions_page_url) > 0:
99
+ other_versions_page_url = other_versions_page_url[0]
100
+ paper = search_all_version('http://' + urlparse(url).netloc + other_versions_page_url)
101
+ is_match = paper is not None and string_similar(title, paper.title) > 0.90
102
+
103
+ if is_match:
104
+ # same paper
105
+ abstract = paper.summary.replace('\n', ' ')
106
+ is_paper_in_arxiv = True
107
+ else:
108
+ # different paper
109
  abstract = abstract
110
  is_paper_in_arxiv = False
111
+
112
+ logging.info('[title]:' + title)
113
+ logging.info('[author]:' + author)
114
+ logging.info('[citation]:' + citation)
115
+
116
  profile.append({
117
+ 'title': title,
118
+ 'author': author,
119
+ 'citation': citation,
120
+ 'abstract': abstract,
121
+ 'is_paper_in_arxiv': is_paper_in_arxiv,
122
  })
123
 
124
  chatbot[-1] = [chatbot[-1][0], title + f'\n\n是否在arxiv中(不在arxiv中无法获取完整摘要):{is_paper_in_arxiv}\n\n' + abstract]
 
127
 
128
  @CatchException
129
  def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
130
+ disable_auto_promotion(chatbot=chatbot)
131
  # 基本信息:功能、贡献者
132
  chatbot.append([
133
  "函数插件功能?",
 
149
  # 清空历史,以免输入溢出
150
  history = []
151
  meta_paper_info_list = yield from get_meta_information(txt, chatbot, history)
152
+ if len(meta_paper_info_list) == 0:
153
+ yield from update_ui_lastest_msg(lastmsg='获取文献失败,可能触发了google反爬虫机制。',chatbot=chatbot, history=history, delay=0)
154
+ return
155
  batchsize = 5
156
  for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)):
157
  if len(meta_paper_info_list[:batchsize]) > 0:
 
173
  "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
174
  msg = '正常'
175
  yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
176
+ path = write_history_to_file(history)
177
+ promote_file_to_downloadzone(path, chatbot=chatbot)
178
+ chatbot.append(("完成了吗?", path));
179
  yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
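The hunk above hangs on two small helpers: a regex that pulls an arxiv ID out of a Google Scholar result link, and a difflib ratio that decides whether an arxiv hit is the same paper (threshold 0.90). A minimal offline sketch of both, mirroring the patch (the `requests`/`arxiv` network calls are left out):

```python
import difflib
import re

def extract_arxiv_id(url):
    # Same pattern as the patch: capture whatever follows "arxiv.org/abs/"
    # up to the next slash; returns None when the link is not an arxiv abs page.
    match = re.search(r'arxiv.org/abs/([^/]+)', url)
    return match.group(1) if match else None

def string_similar(s1, s2):
    # quick_ratio() is a cheap upper-bound estimate of SequenceMatcher.ratio(),
    # good enough for one title comparison per search result.
    return difflib.SequenceMatcher(None, s1, s2).quick_ratio()

print(extract_arxiv_id('https://arxiv.org/abs/1812.10695'))  # 1812.10695
print(extract_arxiv_id('https://example.com/paper'))         # None
# Identical titles score 1.0; unrelated titles fall far below the 0.90 cutoff.
print(string_similar('Attention is all you need', 'Attention is all you need'))
print(string_similar('Attention is all you need', 'BERT pretraining') > 0.90)
```

One caveat the 0.90 threshold hides: `quick_ratio()` is case-sensitive, so a title that differs from the arxiv record only in capitalization can still score below the cutoff and fall through to the all-version search.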
docker-compose.yml CHANGED
@@ -1,7 +1,7 @@
 #【请修改完参数后,删除此行】请在以下方案中选择一种,然后删除其他的方案,最后docker-compose up运行 | Please choose from one of these options below, delete other options as well as This Line
 
 ## ===================================================
-## [Option 1] If you do not need to run local models (remote services such as chatgpt and newbing only)
+## [Option 1] If you do not need to run local models (online LLM services only, such as chatgpt, azure, Spark, Qianfan, claude)
 ## ===================================================
 version: '3'
 services:
@@ -13,7 +13,7 @@ services:
     USE_PROXY: ' True '
     proxies: ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } '
     LLM_MODEL: ' gpt-3.5-turbo '
-    AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "newbing"] '
+    AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "sparkv2", "qianfan"] '
     WEB_PORT: ' 22303 '
     ADD_WAIFU: ' True '
     # THEME: ' Chuanhu-Small-and-Beautiful '
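Every setting in the compose file above reaches the application as a quoted environment-variable string. The trailing comma inside `proxies` suggests these strings are evaluated as Python literals rather than strict JSON, so a quick way to sanity-check a value before deploying is `ast.literal_eval` (an illustrative sketch, not a helper from the repo; the parsing assumption is mine):

```python
import ast

# Values copied verbatim from the compose file above.
env = {
    'USE_PROXY': ' True ',
    'proxies': ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } ',
    'AVAIL_LLM_MODELS': ' ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "sparkv2", "qianfan"] ',
}

# ast.literal_eval accepts the trailing comma that strict JSON parsers reject.
parsed = {key: ast.literal_eval(value.strip()) for key, value in env.items()}
print(parsed['USE_PROXY'])             # True
print(parsed['proxies']['https'])      # socks5h://localhost:10880
print(len(parsed['AVAIL_LLM_MODELS'])) # 6
```

If any of the strings were malformed (for example, single quotes dropped so the shell mangles the braces), `literal_eval` raises immediately instead of the container failing later at startup.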
docs/Dockerfile+ChatGLM CHANGED
@@ -1,62 +1,2 @@
-# How to build | 如何构建: docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
-# How to run | (1) Run directly in one step (on GPU 0): docker run --rm -it --net=host --gpus \"device=0\" gpt-academic
-# How to run | (2) Enter the container for adjustments before running (on GPU 1): docker run --rm -it --net=host --gpus \"device=1\" gpt-academic bash
-
-# Build from the NVIDIA base image for GPU support (the CUDA version in the host's nvidia-smi must be >= 11.3)
-FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
-ARG useProxyNetwork=''
-RUN apt-get update
-RUN apt-get install -y curl proxychains curl
-RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
-
-# Configure the proxy network (used while building the Docker image)
-# # comment out below if you do not need proxy network | 如果不需要翻墙 - 从此行向下删除
-RUN $useProxyNetwork curl cip.cc
-RUN sed -i '$ d' /etc/proxychains.conf
-RUN sed -i '$ d' /etc/proxychains.conf
-# Fill in the host's proxy protocol here (used to pull code from github)
-RUN echo "socks5 127.0.0.1 10880" >> /etc/proxychains.conf
-ARG useProxyNetwork=proxychains
-# # comment out above if you do not need proxy network | 如果不需要翻墙 - 从此行向上删除
-
-
-# use python3 as the system default python
-RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
-# Install pytorch
-RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
-# Clone the repository
-WORKDIR /gpt
-RUN $useProxyNetwork git clone https://github.com/binary-husky/gpt_academic.git
-WORKDIR /gpt/gpt_academic
-RUN $useProxyNetwork python3 -m pip install -r requirements.txt
-RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
-RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
-
-# Warm up the ChatGLM parameters (optional, not required)
-RUN echo ' \n\
-from transformers import AutoModel, AutoTokenizer \n\
-chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) \n\
-chatglm_model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float() ' >> warm_up_chatglm.py
-RUN python3 -u warm_up_chatglm.py
-
-# Disable the cache to make sure the code is up to date
-ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
-RUN $useProxyNetwork git pull
-
-# Warm up the Tiktoken module
-RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
-
-# Configure the proxy and API-KEY for chatgpt-academic (optional, not required)
-# Multiple API-KEYs may be filled in at once; openai keys and api2d keys can coexist, separated by commas, e.g. API_KEY = "sk-openaikey1,fkxxxx-api2dkey2,........"
-# LLM_MODEL selects the initial model
-# LOCAL_MODEL_DEVICE selects the device on which local models such as chatglm run; choose cpu or cuda
-# [Note: the following corresponds one-to-one with `config.py`; consult config.py to complete this configuration]
-RUN echo ' \n\
-API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \n\
-USE_PROXY = True \n\
-LLM_MODEL = "chatglm" \n\
-LOCAL_MODEL_DEVICE = "cuda" \n\
-proxies = { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } ' >> config_private.py
-
-# Launch
-CMD ["python3", "-u", "main.py"]
+# This Dockerfile is no longer maintained; see docs/GithubAction+ChatGLM+Moss
+
docs/Dockerfile+JittorLLM CHANGED
@@ -1,59 +1 @@
-# How to build | 如何构建: docker build -t gpt-academic-jittor --network=host -f Dockerfile+ChatGLM .
-# How to run | (1) Run directly in one step (on GPU 0): docker run --rm -it --net=host --gpus \"device=0\" gpt-academic-jittor bash
-# How to run | (2) Enter the container for adjustments before running (on GPU 1): docker run --rm -it --net=host --gpus \"device=1\" gpt-academic-jittor bash
-
-# Build from the NVIDIA base image for GPU support (the CUDA version in the host's nvidia-smi must be >= 11.3)
-FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
-ARG useProxyNetwork=''
-RUN apt-get update
-RUN apt-get install -y curl proxychains curl g++
-RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
-
-# Configure the proxy network (used while building the Docker image)
-# # comment out below if you do not need proxy network | 如果不需要翻墙 - 从此行向下删除
-RUN $useProxyNetwork curl cip.cc
-RUN sed -i '$ d' /etc/proxychains.conf
-RUN sed -i '$ d' /etc/proxychains.conf
-# Fill in the host's proxy protocol here (used to pull code from github)
-RUN echo "socks5 127.0.0.1 10880" >> /etc/proxychains.conf
-ARG useProxyNetwork=proxychains
-# # comment out above if you do not need proxy network | 如果不需要翻墙 - 从此行向上删除
-
-
-# use python3 as the system default python
-RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
-# Install pytorch
-RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
-# Clone the repository
-WORKDIR /gpt
-RUN $useProxyNetwork git clone https://github.com/binary-husky/gpt_academic.git
-WORKDIR /gpt/gpt_academic
-RUN $useProxyNetwork python3 -m pip install -r requirements.txt
-RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
-RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
-RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I
-
-# Download JittorLLMs
-RUN $useProxyNetwork git clone https://github.com/binary-husky/JittorLLMs.git --depth 1 request_llm/jittorllms
-
-# Disable the cache to make sure the code is up to date
-ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
-RUN $useProxyNetwork git pull
-
-# Warm up the Tiktoken module
-RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
-
-# Configure the proxy and API-KEY for chatgpt-academic (optional, not required)
-# Multiple API-KEYs may be filled in at once; openai keys and api2d keys can coexist, separated by commas, e.g. API_KEY = "sk-openaikey1,fkxxxx-api2dkey2,........"
-# LLM_MODEL selects the initial model
-# LOCAL_MODEL_DEVICE selects the device on which local models such as chatglm run; choose cpu or cuda
-# [Note: the following corresponds one-to-one with `config.py`; consult config.py to complete this configuration]
-RUN echo ' \n\
-API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \n\
-USE_PROXY = True \n\
-LLM_MODEL = "chatglm" \n\
-LOCAL_MODEL_DEVICE = "cuda" \n\
-proxies = { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } ' >> config_private.py
-
-# Launch
-CMD ["python3", "-u", "main.py"]
+# This Dockerfile is no longer maintained; see docs/GithubAction+JittorLLMs
docs/Dockerfile+NoLocal+Latex CHANGED
@@ -1,27 +1 @@
-# This Dockerfile builds a "no local model" environment; if you need local models such as chatglm, see docs/Dockerfile+ChatGLM
-# - 1 Edit `config.py`
-# - 2 Build: docker build -t gpt-academic-nolocal-latex -f docs/Dockerfile+NoLocal+Latex .
-# - 3 Run: docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex
-
-FROM fuqingxu/python311_texlive_ctex:latest
-
-# Working directory
-WORKDIR /gpt
-
-ARG useProxyNetwork=''
-
-RUN $useProxyNetwork pip3 install gradio openai numpy arxiv rich -i https://pypi.douban.com/simple/
-RUN $useProxyNetwork pip3 install colorama Markdown pygments pymupdf -i https://pypi.douban.com/simple/
-
-# Copy the project files
-COPY . .
-
-
-# Install dependencies
-RUN $useProxyNetwork pip3 install -r requirements.txt -i https://pypi.douban.com/simple/
-
-# Optional step to warm up the modules
-RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
-
-# Launch
-CMD ["python3", "-u", "main.py"]
+# This Dockerfile is no longer maintained; see docs/GithubAction+NoLocal+Latex
docs/GithubAction+AllCapacity ADDED
@@ -0,0 +1,37 @@
+# docker build -t gpt-academic-all-capacity -f docs/GithubAction+AllCapacity --network=host --build-arg http_proxy=http://localhost:10881 --build-arg https_proxy=http://localhost:10881 .
+
+# Build from the NVIDIA base image for GPU support (the CUDA version in the host's nvidia-smi must be >= 11.3)
+FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest
+
+# use python3 as the system default python
+WORKDIR /gpt
+RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
+# Install pytorch
+RUN python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
+# Prepare pip dependencies
+RUN python3 -m pip install openai numpy arxiv rich
+RUN python3 -m pip install colorama Markdown pygments pymupdf
+RUN python3 -m pip install python-docx moviepy pdfminer
+RUN python3 -m pip install zh_langchain==0.2.1
+RUN python3 -m pip install nougat-ocr
+RUN python3 -m pip install rarfile py7zr
+RUN python3 -m pip install aliyun-python-sdk-core==2.13.3 pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+# Clone the repository
+WORKDIR /gpt
+RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
+WORKDIR /gpt/gpt_academic
+RUN git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss
+
+RUN python3 -m pip install -r requirements.txt
+RUN python3 -m pip install -r request_llm/requirements_moss.txt
+RUN python3 -m pip install -r request_llm/requirements_qwen.txt
+RUN python3 -m pip install -r request_llm/requirements_chatglm.txt
+RUN python3 -m pip install -r request_llm/requirements_newbing.txt
+
+
+
+# Warm up the Tiktoken module
+RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+
+# Launch
+CMD ["python3", "-u", "main.py"]
docs/GithubAction+ChatGLM+Moss CHANGED
@@ -1,7 +1,6 @@
 
 # Build from the NVIDIA base image for GPU support (the CUDA version in the host's nvidia-smi must be >= 11.3)
 FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
-ARG useProxyNetwork=''
 RUN apt-get update
 RUN apt-get install -y curl proxychains curl gcc
 RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
docs/GithubAction+NoLocal+Latex CHANGED
@@ -1,6 +1,6 @@
 # This Dockerfile builds a "no local model" environment; if you need local models such as chatglm, see docs/Dockerfile+ChatGLM
 # - 1 Edit `config.py`
-# - 2 Build: docker build -t gpt-academic-nolocal-latex -f docs/Dockerfile+NoLocal+Latex .
+# - 2 Build: docker build -t gpt-academic-nolocal-latex -f docs/GithubAction+NoLocal+Latex .
 # - 3 Run: docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex
 
 FROM fuqingxu/python311_texlive_ctex:latest
@@ -10,6 +10,10 @@ WORKDIR /gpt
 
 RUN pip3 install gradio openai numpy arxiv rich
 RUN pip3 install colorama Markdown pygments pymupdf
+RUN pip3 install python-docx moviepy pdfminer
+RUN pip3 install zh_langchain==0.2.1
+RUN pip3 install nougat-ocr
+RUN pip3 install aliyun-python-sdk-core==2.13.3 pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
 
 # Copy the project files
 COPY . .
docs/translate_english.json CHANGED
@@ -2161,5 +2161,292 @@
 "在运行过程中动态地修改配置": "Dynamically modify configurations during runtime",
 "请先把模型切换至gpt-*或者api2d-*": "Please switch the model to gpt-* or api2d-* first",
 "获取简单聊天的句柄": "Get handle of simple chat",
-"获取插件的默认参数": "Get default parameters of plugin"
+"获取插件的默认参数": "Get default parameters of plugin",
+"GROBID服务不可用": "GROBID service is unavailable",
+"请问": "May I ask",
+"如果等待时间过长": "If the waiting time is too long",
+"编程": "programming",
+"5. 现在": "5. Now",
+"您不必读这个else分支": "You don't have to read this else branch",
+"用插件实现": "Implement with plugins",
+"插件分类默认选项": "Default options for plugin classification",
+"填写多个可以均衡负载": "Filling in multiple can balance the load",
+"色彩主题": "Color theme",
+"可能附带额外依赖 -=-=-=-=-=-=-": "May come with additional dependencies -=-=-=-=-=-=-",
+"讯飞星火认知大模型": "Xunfei Xinghuo cognitive model",
+"ParsingLuaProject的所有源文件 | 输入参数为路径": "All source files of ParsingLuaProject | Input parameter is path",
+"复制以下空间https": "Copy the following space https",
+"如果意图明确": "If the intention is clear",
+"如系统是Linux": "If the system is Linux",
+"├── 语音功能": "├── Voice function",
+"见Github wiki": "See Github wiki",
+"⭐ ⭐ ⭐ 立即应用配置": "⭐ ⭐ ⭐ Apply configuration immediately",
+"现在您只需要再次重复一次您的指令即可": "Now you just need to repeat your command again",
+"没辙了": "No way",
+"解析Jupyter Notebook文件 | 输入参数为路径": "Parse Jupyter Notebook file | Input parameter is path",
+"⭐ ⭐ ⭐ 确认插件参数": "⭐ ⭐ ⭐ Confirm plugin parameters",
+"找不到合适插件执行该任务": "Cannot find a suitable plugin to perform this task",
+"接驳VoidTerminal": "Connect to VoidTerminal",
+"**很好": "**Very good",
+"对话|编程": "Conversation|Programming",
+"对话|编程|学术": "Conversation|Programming|Academic",
+"4. 建议使用 GPT3.5 或更强的模型": "4. It is recommended to use GPT3.5 or a stronger model",
+"「请调用插件翻译PDF论文": "Please call the plugin to translate the PDF paper",
+"3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词": "3. If you use keywords such as 'call plugin xxx', 'modify configuration xxx', 'please', etc.",
+"以下是一篇学术论文的基本信息": "The following is the basic information of an academic paper",
+"GROBID服务器地址": "GROBID server address",
+"修改配置": "Modify configuration",
+"理解PDF文档的内容并进行回答 | 输入参数为路径": "Understand the content of the PDF document and answer | Input parameter is path",
+"对于需要高级参数的插件": "For plugins that require advanced parameters",
+"🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行": "Main process execution 🏃‍♂️🏃‍♂️🏃‍♂️",
+"没有填写 HUGGINGFACE_ACCESS_TOKEN": "HUGGINGFACE_ACCESS_TOKEN not filled in",
+"调度插件": "Scheduling plugin",
+"语言模型": "Language model",
+"├── ADD_WAIFU 加一个live2d装饰": "├── ADD_WAIFU Add a live2d decoration",
+"初始化": "Initialization",
+"选择了不存在的插件": "Selected a non-existent plugin",
+"修改本项目的配置": "Modify the configuration of this project",
+"如果输入的文件路径是正确的": "If the input file path is correct",
+"2. 您可以打开插件下拉菜单以了解本项目的各种能力": "2. You can open the plugin dropdown menu to learn about various capabilities of this project",
+"VoidTerminal插件说明": "VoidTerminal plugin description",
+"无法理解您的需求": "Unable to understand your requirements",
+"默认 AdvancedArgs = False": "Default AdvancedArgs = False",
+"「请问Transformer网络的结构是怎样的": "What is the structure of the Transformer network?",
+"比如1812.10695": "For example, 1812.10695",
+"翻译README或MD": "Translate README or MD",
+"读取新配置中": "Reading new configuration",
+"假如偏离了您的要求": "If it deviates from your requirements",
+"├── THEME 色彩主题": "├── THEME color theme",
+"如果还找不到": "If still not found",
+"问": "Ask",
+"请检查系统字体": "Please check system fonts",
+"如果错误": "If there is an error",
+"作为替代": "As an alternative",
+"ParseJavaProject的所有源文件 | 输入参数为路径": "All source files of ParseJavaProject | Input parameter is path",
+"比对相同参数时生成的url与自己代码生成的url是否一致": "Check if the generated URL matches the one generated by your code when comparing the same parameters",
+"清除本地缓存数据": "Clear local cache data",
+"使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL": "Use Google Scholar search assistant to search for results of a specific URL | Input parameter is the URL of Google Scholar search page",
+"运行方法": "Running method",
+"您已经上传了文件**": "You have uploaded the file **",
+"「给爷翻译Arxiv论文": "Translate Arxiv papers for me",
+"请修改config中的GROBID_URL": "Please modify GROBID_URL in the config",
+"处理特殊情况": "Handling special cases",
+"不要自己瞎搞!」": "Don't mess around by yourself!",
+"LoadConversationHistoryArchive | 输入参数为路径": "LoadConversationHistoryArchive | Input parameter is a path",
+"| 输入参数是一个问题": "| Input parameter is a question",
+"├── CHATBOT_HEIGHT 对话窗的高度": "├── CHATBOT_HEIGHT Height of the chat window",
+"对C": "To C",
+"默认关闭": "Default closed",
+"当前进度": "Current progress",
+"HUGGINGFACE的TOKEN": "HUGGINGFACE's TOKEN",
+"查找可用插件中": "Searching for available plugins",
+"下载LLAMA时起作用 https": "Works when downloading LLAMA https",
+"使用 AK": "Using AK",
+"正在执行任务": "Executing task",
+"保存当前的对话 | 不需要输入参数": "Save current conversation | No input parameters required",
+"对话": "Conversation",
+"图中鲜花怒放": "Flowers blooming in the picture",
+"批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包": "Batch translate Chinese to English in Markdown files | Input parameter is a path or upload a compressed package",
+"ParsingCSharpProject的所有源文件 | 输入参数为路径": "ParsingCSharpProject's all source files | Input parameter is a path",
+"为我翻译PDF论文": "Translate PDF papers for me",
+"聊天对话": "Chat conversation",
+"拼接鉴权参数": "Concatenate authentication parameters",
+"请检查config中的GROBID_URL": "Please check the GROBID_URL in the config",
+"拼接字符串": "Concatenate strings",
+"您的意图可以被识别的更准确": "Your intent can be recognized more accurately",
+"该模型有七个 bin 文件": "The model has seven bin files",
+"但思路相同": "But the idea is the same",
+"你需要翻译": "You need to translate",
+"或者描述文件所在的路径": "Or the path of the description file",
+"请您上传文件": "Please upload the file",
+"不常用": "Not commonly used",
+"尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-": "Experimental plugins that have not been fully tested & plugins that require additional dependencies -=--=-",
+"⭐ ⭐ ⭐ 选择插件": "⭐ ⭐ ⭐ Select plugin",
+"当前配置不允许被修改!如需激活本功能": "The current configuration does not allow modification! To activate this feature",
+"正在连接GROBID服务": "Connecting to GROBID service",
+"用户图形界面布局依赖关系示意图": "Diagram of user interface layout dependencies",
+"是否允许通过自然语言描述修改本页的配置": "Allow modifying the configuration of this page through natural language description",
+"self.chatbot被序列化": "self.chatbot is serialized",
+"本地Latex论文精细翻译 | 输入参数是路径": "Locally translate Latex papers with fine-grained translation | Input parameter is the path",
+"抱歉": "Sorry",
+"以下这部分是最早加入的最稳定的模型 -=-=-=-=-=-=-": "The following section is the earliest and most stable model added",
+"「用插件翻译README": "Translate README with plugins",
+"如果不正确": "If incorrect",
+"⭐ ⭐ ⭐ 读取可配置项目条目": "⭐ ⭐ ⭐ Read configurable project entries",
+"开始语言对话 | 没有输入参数": "Start language conversation | No input parameters",
+"谨慎操作 | 不需要输入参数": "Handle with caution | No input parameters required",
+"对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包": "Correct the entire English Latex project | Input parameter is the path or upload compressed package",
+"如果需要处理文件": "If file processing is required",
+"提供图像的内容": "Provide the content of the image",
+"查看历史上的今天事件 | 不需要输入参数": "View historical events of today | No input parameters required",
+"这个稍微啰嗦一点": "This is a bit verbose",
+"多线程解析并翻译此项目的源码 | 不需要输入参数": "Parse and translate the source code of this project in multi-threading | No input parameters required",
+"此处打印出建立连接时候的url": "Print the URL when establishing the connection here",
+"精准翻译PDF论文为中文 | 输入参数为路径": "Translate PDF papers accurately into Chinese | Input parameter is the path",
+"检测到操作错误!当您上传文档之后": "Operation error detected! After you upload the document",
+"在线大模型配置关联关系示意图": "Online large model configuration relationship diagram",
+"你的填写的空间名如grobid": "Your filled space name such as grobid",
+"获取方法": "Get method",
+"| 输入参数为路径": "| Input parameter is the path",
+"⭐ ⭐ ⭐ 执行插件": "⭐ ⭐ ⭐ Execute plugin",
+"├── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置": "├── ALLOW_RESET_CONFIG Whether to allow modifying the configuration of this page through natural language description",
+"重新页面即可生效": "Refresh the page to take effect",
+"设为public": "Set as public",
+"并在此处指定模型路径": "And specify the model path here",
+"分析用户意图中": "Analyzing user intent",
+"刷新下拉列表": "Refresh the drop-down list",
+"失败 当前语言模型": "Failed current language model",
+"1. 请用**自然语言**描述您需要做什么": "1. Please describe what you need to do in **natural language**",
+"对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包": "Translate the full text of Latex projects from Chinese to English | Input parameter is the path or upload a compressed package",
+"没有配置BAIDU_CLOUD_API_KEY": "No configuration for BAIDU_CLOUD_API_KEY",
+"设置默认值": "Set default value",
+"如果太多了会导致gpt无法理解": "If there are too many, it will cause GPT to be unable to understand",
+"绿草如茵": "Green grass",
+"├── LAYOUT 窗口布局": "├── LAYOUT window layout",
+"用户意图理解": "User intent understanding",
+"生成RFC1123格式的时间戳": "Generate RFC1123 formatted timestamp",
+"欢迎您前往Github反馈问题": "Welcome to go to Github to provide feedback",
+"排除已经是按钮的插件": "Exclude plugins that are already buttons",
+"亦在下拉菜单中显示": "Also displayed in the dropdown menu",
+"导致无法反序列化": "Causing deserialization failure",
+"意图=": "Intent =",
+"章节": "Chapter",
+"调用插件": "Invoke plugin",
+"ParseRustProject的所有源文件 | 输入参数为路径": "All source files of ParseRustProject | Input parameter is path",
+"需要点击“函数插件区”按钮进行处理": "Need to click the 'Function Plugin Area' button for processing",
+"默认 AsButton = True": "Default AsButton = True",
+"收到websocket错误的处理": "Handling websocket errors",
+"用插件": "Use Plugin",
+"没有选择任何插件组": "No plugin group selected",
+"答": "Answer",
+"可修改成本地GROBID服务": "Can modify to local GROBID service",
+"用户意图": "User intent",
+"对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包": "Polish the full text of English Latex projects | Input parameters are paths or uploaded compressed packages",
+"「我不喜欢当前的界面颜色": "I don't like the current interface color",
+"「请调用插件": "Please call the plugin",
+"VoidTerminal状态": "VoidTerminal status",
+"新配置": "New configuration",
+"支持Github链接": "Support Github links",
+"没有配置BAIDU_CLOUD_SECRET_KEY": "No BAIDU_CLOUD_SECRET_KEY configured",
+"获取当前VoidTerminal状态": "Get the current VoidTerminal status",
+"刷新按钮": "Refresh button",
+"为了防止pickle.dumps": "To prevent pickle.dumps",
+"放弃治疗": "Give up treatment",
+"可指定不同的生成长度、top_p等相关超参": "Can specify different generation lengths, top_p and other related hyperparameters",
+"请将题目和摘要翻译为": "Translate the title and abstract",
+"通过appid和用户的提问来生成请参数": "Generate request parameters through appid and user's question",
+"ImageGeneration | 输入参数字符串": "ImageGeneration | Input parameter string",
+"将文件拖动到文件上传区": "Drag and drop the file to the file upload area",
+"如果意图模糊": "If the intent is ambiguous",
+"星火认知大模型": "Spark Cognitive Big Model",
+"执行中. 删除 gpt_log & private_upload": "Executing. Delete gpt_log & private_upload",
+"默认 Color = secondary": "Default Color = secondary",
+"此处也不需要修改": "No modification is needed here",
+"⭐ ⭐ ⭐ 分析用户意图": "⭐ ⭐ ⭐ Analyze user intent",
+"再试一次": "Try again",
+"请写bash命令实现以下功能": "Please write a bash command to implement the following function",
+"批量SummarizingWordDocuments | 输入参数为路径": "Batch SummarizingWordDocuments | Input parameter is the path",
+"/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析": "Parse the python file in /Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns",
+"当我要求你写bash命令时": "When I ask you to write a bash command",
+"├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框": "├── AUTO_CLEAR_TXT Whether to automatically clear the input box when submitting",
+"按停止键终止": "Press the stop key to terminate",
+"文心一言": "Original text",
+"不能理解您的意图": "Cannot understand your intention",
+"用简单的关键词检测用户意图": "Detect user intention with simple keywords",
+"中文": "Chinese",
+"解析一个C++项目的所有源文件": "Parse all source files of a C++ project",
+"请求的Prompt为": "Requested prompt is",
+"参考本demo的时候可取消上方打印的注释": "You can remove the comments above when referring to this demo",
+"开始接收回复": "Start receiving replies",
+"接入讯飞星火大模型 https": "Access to Xunfei Xinghuo large model https",
+"用该压缩包进行反馈": "Use this compressed package for feedback",
+"翻译Markdown或README": "Translate Markdown or README",
+"SK 生成鉴权签名": "SK generates authentication signature",
+"插件参数": "Plugin parameters",
+"需要访问中文Bing": "Need to access Chinese Bing",
+"ParseFrontendProject的所有源文件": "Parse all source files of ParseFrontendProject",
+"现在将执行效果稍差的旧版代码": "Now execute the older version code with slightly worse performance",
+"您需要明确说明并在指令中提到它": "You need to specify and mention it in the command",
+"请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件": "Please set ALLOW_RESET_CONFIG=True in config.py and restart the software",
+"按照自然语言描述生成一个动画 | 输入参数是一段话": "Generate an animation based on natural language description | Input parameter is a sentence",
+"你的hf用户名如qingxu98": "Your hf username is qingxu98",
+"Arixv论文精细翻译 | 输入参数arxiv论文的ID": "Fine translation of Arixv paper | Input parameter is the ID of arxiv paper",
+"无法获取 abstract": "Unable to retrieve abstract",
+"尽可能地仅用一行命令解决我的要求": "Try to solve my request using only one command",
+"提取插件参数": "Extract plugin parameters",
+"配置修改完成": "Configuration modification completed",
+"正在修改配置中": "Modifying configuration",
+"ParsePythonProject的所有源文件": "All source files of ParsePythonProject",
+"请求错误": "Request error",
+"精准翻译PDF论文": "Accurate translation of PDF paper",
+"无法获取 authors": "Unable to retrieve authors",
+"该插件诞生时间不长": "This plugin has not been around for long",
+"返回项目根路径": "Return project root path",
+"BatchSummarizePDFDocuments的内容 | 输入参数为路径": "Content of BatchSummarizePDFDocuments | Input parameter is a path",
+"百度千帆": "Baidu Qianfan",
+"解析一个C++项目的所有头文件": "Parse all header files of a C++ project",
+"现在请您描述您的需求": "Now please describe your requirements",
+"该功能具有一定的危险性": "This feature has a certain level of danger",
+"收到websocket关闭的处理": "Processing when receiving websocket closure",
+"读取Tex论文并写摘要 | 输入参数为路径": "Read Tex paper and write abstract | Input parameter is the path",
+"地址为https": "The address is https",
+"限制最多前10个配置项": "Limit up to 10 configuration items",
+"6. 如果不需要上传文件": "6. If file upload is not needed",
+"默认 Group = 对话": "Default Group = Conversation",
+"五秒后即将重启!若出现报错请无视即可": "Restarting in five seconds! Please ignore if there is an error",
+"收到websocket连接建立的处理": "Processing when receiving websocket connection establishment",
+"批量生成函数的注释 | 输入参数为路径": "Batch generate function comments | Input parameter is the path",
+"聊天": "Chat",
+"但您可以尝试再试一次": "But you can try again",
+"千帆大模型平台": "Qianfan Big Model Platform",
+"直接运行 python tests/test_plugins.py": "Run python tests/test_plugins.py directly",
+"或是None": "Or None",
+"进行hmac-sha256进行加密": "Perform encryption using hmac-sha256",
+"批量总结音频或视频 | 输入参数为路径": "Batch summarize audio or video | Input parameter is path",
+"插件在线服务配置依赖关系示意图": "Plugin online service configuration dependency diagram",
+"开始初始化模型": "Start initializing model",
+"弱模型可能无法理解您的想法": "Weak model may not understand your ideas",
+"解除大小写限制": "Remove case sensitivity restriction",
+"跳过提示环节": "Skip prompt section",
+"接入一些逆向工程https": "Access some reverse engineering https",
+"执行完成": "Execution completed",
+"如果需要配置": "If configuration is needed",
+"此处不修改;如果使用本地或无地域限制的大模型时": "Do not modify here; if using local or region-unrestricted large models",
+"你是一个Linux大师级用户": "You are a Linux master-level user",
+"arxiv论文的ID是1812.10695": "The ID of the arxiv paper is 1812.10695",
+"而不是点击“提交”按钮": "Instead of clicking the 'Submit' button",
+"解析一个Go项目的所有源文件 | 输入参数为路径": "Parse all source files of a Go project | Input parameter is path",
+"对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包": "Polish the entire text of a Chinese Latex project | Input parameter is path or upload compressed package",
+"「生成一张图片": "Generate an image",
+"将Markdown或README翻译为中文 | 输入参数为路径或URL": "Translate Markdown or README to Chinese | Input parameters are path or URL",
+"训练时间": "Training time",
+"将请求的鉴权参数组合为字典": "Combine the requested authentication parameters into a dictionary",
+"对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包": "Translate the entire text of Latex project from English to Chinese | Input parameters are path or uploaded compressed package"
2425
+ "内容如下": "The content is as follows",
2426
+ "用于高质量地读取PDF文档": "Used for high-quality reading of PDF documents",
2427
+ "上下文太长导致 token 溢出": "The context is too long, causing token overflow",
2428
+ "├── DARK_MODE 暗色模式 / 亮色模式": "├── DARK_MODE Dark mode / Light mode",
2429
+ "语言模型回复为": "The language model replies as",
2430
+ "from crazy_functions.chatglm微调工具 import 微调数据集生成": "from crazy_functions.chatglm fine-tuning tool import fine-tuning dataset generation",
2431
+ "为您选择了插件": "Selected plugin for you",
2432
+ "无法获取 title": "Unable to get title",
2433
+ "收到websocket消息的处理": "Processing of received websocket messages",
2434
+ "2023年": "2023",
2435
+ "清除所有缓存文件": "Clear all cache files",
2436
+ "├── PDF文档精准解析": "├── Accurate parsing of PDF documents",
2437
+ "论文我刚刚放到上传区了": "I just put the paper in the upload area",
2438
+ "生成url": "Generate URL",
2439
+ "以下部分是新加入的模型": "The following section is the newly added model",
2440
+ "学术": "Academic",
2441
+ "├── DEFAULT_FN_GROUPS 插件分类默认选项": "├── DEFAULT_FN_GROUPS Plugin classification default options",
2442
+ "不推荐使用": "Not recommended for use",
2443
+ "正在同时咨询": "Consulting simultaneously",
2444
+ "将Markdown翻译为中文 | 输入参数为路径或URL": "Translate Markdown to Chinese | Input parameters are path or URL",
2445
+ "Github网址是https": "The Github URL is https",
2446
+ "试着加上.tex后缀试试": "Try adding the .tex suffix",
2447
+ "对项目中的各个插件进行测试": "Test each plugin in the project",
2448
+ "插件说明": "Plugin description",
2449
+ "├── CODE_HIGHLIGHT 代码高亮": "├── CODE_HIGHLIGHT Code highlighting",
2450
+ "记得用插件": "Remember to use the plugin",
2451
+ "谨慎操作": "Handle with caution"
2452
  }
docs/translate_std.json CHANGED
@@ -83,5 +83,10 @@
     "图片生成": "ImageGeneration",
     "动画生成": "AnimationGeneration",
     "语音助手": "VoiceAssistant",
-    "启动微调": "StartFineTuning"
+    "启动微调": "StartFineTuning",
+    "清除缓存": "ClearCache",
+    "辅助功能": "Accessibility",
+    "虚空终端": "VoidTerminal",
+    "解析PDF_基于GROBID": "ParsePDF_BasedOnGROBID",
+    "虚空终端主路由": "VoidTerminalMainRoute"
 }
multi_language.py CHANGED
@@ -478,6 +478,8 @@ def step_2_core_key_translate():
         up = trans_json(need_translate, language=LANG, special=False)
         map_to_json(up, language=LANG)
         cached_translation = read_map_from_json(language=LANG)
+        LANG_STD = 'std'
+        cached_translation.update(read_map_from_json(language=LANG_STD))
         cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))
 
         # ===============================================
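Reviewer note: the hunk above merges the `std` translation map into the cached map and then re-sorts the keys longest-first. A minimal sketch (with hypothetical two-entry data, not the project's real maps) of why longest-first ordering matters when substituting translated terms:

```python
# Hypothetical translation map: one key is a substring of another.
cached_translation = {"图片": "Image", "图片生成": "ImageGeneration"}

# Sort keys longest-first, as the hunk above does, so longer phrases
# are replaced before their shorter substrings.
ordered = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))

def apply_translation(text, mapping):
    # Naive sequential replacement; correctness depends on key order.
    for zh, en in mapping.items():
        text = text.replace(zh, en)
    return text

print(apply_translation("图片生成", ordered))             # ImageGeneration
print(apply_translation("图片生成", cached_translation))  # Image生成 (shorter key fired first)
```

Without the sort, the shorter key `图片` consumes part of the longer phrase and the intended mapping never matches.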
request_llm/bridge_all.py CHANGED
@@ -398,6 +398,22 @@ if "spark" in AVAIL_LLM_MODELS:   # 讯飞星火认知大模型
         })
     except:
         print(trimmed_format_exc())
+if "sparkv2" in AVAIL_LLM_MODELS:   # 讯飞星火认知大模型
+    try:
+        from .bridge_spark import predict_no_ui_long_connection as spark_noui
+        from .bridge_spark import predict as spark_ui
+        model_info.update({
+            "sparkv2": {
+                "fn_with_ui": spark_ui,
+                "fn_without_ui": spark_noui,
+                "endpoint": None,
+                "max_token": 4096,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            }
+        })
+    except:
+        print(trimmed_format_exc())
 if "llama2" in AVAIL_LLM_MODELS:   # llama2
     try:
         from .bridge_llama2 import predict_no_ui_long_connection as llama2_noui
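Reviewer note: the `sparkv2` entry follows the same registration pattern as every other model in `bridge_all.py` — each `model_info` key maps a model name to its UI/non-UI prediction functions and token limit. A minimal dispatch sketch with placeholder functions (not the real bridge modules):

```python
# Placeholder bridge functions standing in for the real spark_ui/spark_noui.
def spark_ui(inputs): return f"spark-ui:{inputs}"
def spark_noui(inputs): return f"spark-noui:{inputs}"

# Registry entry shaped like the one added in the hunk above.
model_info = {
    "sparkv2": {
        "fn_with_ui": spark_ui,
        "fn_without_ui": spark_noui,
        "endpoint": None,
        "max_token": 4096,
    }
}

def predict(model, inputs, with_ui=True):
    # Dispatch to the registered bridge for the requested model.
    entry = model_info[model]
    fn = entry["fn_with_ui"] if with_ui else entry["fn_without_ui"]
    return fn(inputs)

print(predict("sparkv2", "hello", with_ui=False))  # spark-noui:hello
```

This is why adding a new backend is just one guarded `model_info.update(...)` block: callers never import the bridge directly, they look it up by model name.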
request_llm/bridge_chatglmft.py CHANGED
@@ -63,9 +63,9 @@ class GetGLMFTHandle(Process):
             # if not os.path.exists(conf): raise RuntimeError('找不到微调模型信息')
             # with open(conf, 'r', encoding='utf8') as f:
             #     model_args = json.loads(f.read())
-            ChatGLM_PTUNING_CHECKPOINT, = get_conf('ChatGLM_PTUNING_CHECKPOINT')
-            assert os.path.exists(ChatGLM_PTUNING_CHECKPOINT), "找不到微调模型检查点"
-            conf = os.path.join(ChatGLM_PTUNING_CHECKPOINT, "config.json")
+            CHATGLM_PTUNING_CHECKPOINT, = get_conf('CHATGLM_PTUNING_CHECKPOINT')
+            assert os.path.exists(CHATGLM_PTUNING_CHECKPOINT), "找不到微调模型检查点"
+            conf = os.path.join(CHATGLM_PTUNING_CHECKPOINT, "config.json")
             with open(conf, 'r', encoding='utf8') as f:
                 model_args = json.loads(f.read())
             if 'model_name_or_path' not in model_args:
@@ -78,9 +78,9 @@ class GetGLMFTHandle(Process):
             config.pre_seq_len = model_args['pre_seq_len']
             config.prefix_projection = model_args['prefix_projection']
 
-            print(f"Loading prefix_encoder weight from {ChatGLM_PTUNING_CHECKPOINT}")
+            print(f"Loading prefix_encoder weight from {CHATGLM_PTUNING_CHECKPOINT}")
             model = AutoModel.from_pretrained(model_args['model_name_or_path'], config=config, trust_remote_code=True)
-            prefix_state_dict = torch.load(os.path.join(ChatGLM_PTUNING_CHECKPOINT, "pytorch_model.bin"))
+            prefix_state_dict = torch.load(os.path.join(CHATGLM_PTUNING_CHECKPOINT, "pytorch_model.bin"))
             new_prefix_state_dict = {}
             for k, v in prefix_state_dict.items():
                 if k.startswith("transformer.prefix_encoder."):
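Reviewer note: the checkpoint-loading code above keeps only keys under `transformer.prefix_encoder.` and strips that module prefix before loading them into the P-Tuning prefix encoder. A standalone sketch of that filtering step, using a toy state dict in place of a real `pytorch_model.bin`:

```python
# Toy state dict; the real code obtains this via torch.load on the
# checkpoint's pytorch_model.bin.
prefix_state_dict = {
    "transformer.prefix_encoder.embedding.weight": "W",
    "transformer.word_embeddings.weight": "E",
}

# Keep only prefix-encoder weights and strip their module prefix,
# mirroring the loop in the hunk above.
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
    if k.startswith("transformer.prefix_encoder."):
        new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v

print(new_prefix_state_dict)  # {'embedding.weight': 'W'}
```

The stripped keys then match the prefix encoder's own parameter names, so `load_state_dict` on that submodule succeeds while the base model's weights are left untouched.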
request_llm/bridge_chatgpt.py CHANGED
@@ -137,6 +137,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     chatbot.append((inputs, ""))
     yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
 
+    # check mis-behavior
+    if raw_input.startswith('private_upload/') and len(raw_input) == 34:
+        chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需要点击“函数插件区”按钮进行处理,而不是点击“提交”按钮。")
+        yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
+        time.sleep(2)
+
     try:
         headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
     except RuntimeError as e:
@@ -178,7 +184,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                 return
 
             chunk_decoded = chunk.decode()
-            if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"choices" not in chunk_decoded):
+            if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"content" not in chunk_decoded):
                 # 数据流的第一帧不携带content
                 is_head_of_the_stream = False; continue
request_llm/bridge_qianfan.py CHANGED
@@ -49,16 +49,17 @@ def get_access_token():
 
 def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
     conversation_cnt = len(history) // 2
+    if system_prompt == "": system_prompt = "Hello"
    messages = [{"role": "user", "content": system_prompt}]
    messages.append({"role": "assistant", "content": 'Certainly!'})
    if conversation_cnt:
        for index in range(0, 2*conversation_cnt, 2):
            what_i_have_asked = {}
            what_i_have_asked["role"] = "user"
-            what_i_have_asked["content"] = history[index]
+            what_i_have_asked["content"] = history[index] if history[index]!="" else "Hello"
            what_gpt_answer = {}
            what_gpt_answer["role"] = "assistant"
-            what_gpt_answer["content"] = history[index+1]
+            what_gpt_answer["content"] = history[index+1] if history[index]!="" else "Hello"
            if what_i_have_asked["content"] != "":
                if what_gpt_answer["content"] == "": continue
                if what_gpt_answer["content"] == timeout_bot_msg: continue
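Reviewer note: `generate_message_payload` walks the flat `history` list two entries at a time to rebuild alternating user/assistant turns, substituting "Hello" for empty strings so the API never receives a blank turn. A simplified sketch of that pairing logic (the emptiness guard on the assistant turn is written out explicitly here, which is a slight idealization of the hunk above):

```python
def build_messages(history, system_prompt=""):
    # Empty prompts are replaced with a placeholder so no turn is blank.
    if system_prompt == "": system_prompt = "Hello"
    messages = [{"role": "user", "content": system_prompt},
                {"role": "assistant", "content": "Certainly!"}]
    # Walk the flat history two entries at a time: (user, assistant) pairs.
    for index in range(0, (len(history) // 2) * 2, 2):
        user = history[index] if history[index] != "" else "Hello"
        bot = history[index + 1] if history[index + 1] != "" else "Hello"
        messages.append({"role": "user", "content": user})
        messages.append({"role": "assistant", "content": bot})
    return messages

msgs = build_messages(["hi", "hello!"], system_prompt="")
print([m["role"] for m in msgs])  # ['user', 'assistant', 'user', 'assistant']
```

Strict user/assistant alternation is what the Qianfan chat endpoint expects, hence the paired iteration instead of appending raw history entries.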
request_llm/bridge_spark.py CHANGED
@@ -2,11 +2,17 @@
 import time
 import threading
 import importlib
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, update_ui_lastest_msg
 from multiprocessing import Process, Pipe
 
 model_name = '星火认知大模型'
 
+def validate_key():
+    XFYUN_APPID, = get_conf('XFYUN_APPID', )
+    if XFYUN_APPID == '00000000' or XFYUN_APPID == '':
+        return False
+    return True
+
 def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
     """
         ⭐多线程方法
@@ -15,6 +21,9 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     watch_dog_patience = 5
     response = ""
 
+    if validate_key() is False:
+        raise RuntimeError('请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET')
+
     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
     for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
@@ -30,6 +39,11 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         函数的说明请见 request_llm/bridge_all.py
     """
     chatbot.append((inputs, ""))
+    yield from update_ui(chatbot=chatbot, history=history)
+
+    if validate_key() is False:
+        yield from update_ui_lastest_msg(lastmsg="[Local Message]: 请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET", chatbot=chatbot, history=history, delay=0)
+        return
 
     if additional_fn is not None:
         from core_functional import handle_core_functionality
request_llm/com_sparkapi.py CHANGED
@@ -58,11 +58,13 @@ class Ws_Param(object):
 class SparkRequestInstance():
     def __init__(self):
         XFYUN_APPID, XFYUN_API_SECRET, XFYUN_API_KEY = get_conf('XFYUN_APPID', 'XFYUN_API_SECRET', 'XFYUN_API_KEY')
-
+        if XFYUN_APPID == '00000000' or XFYUN_APPID == '': raise RuntimeError('请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET')
         self.appid = XFYUN_APPID
         self.api_secret = XFYUN_API_SECRET
         self.api_key = XFYUN_API_KEY
         self.gpt_url = "ws://spark-api.xf-yun.com/v1.1/chat"
+        self.gpt_url_v2 = "ws://spark-api.xf-yun.com/v2.1/chat"
+
         self.time_to_yield_event = threading.Event()
         self.time_to_exit_event = threading.Event()
 
@@ -83,7 +85,12 @@ class SparkRequestInstance():
 
 
     def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt):
-        wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, self.gpt_url)
+        if llm_kwargs['llm_model'] == 'sparkv2':
+            gpt_url = self.gpt_url_v2
+        else:
+            gpt_url = self.gpt_url
+
+        wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, gpt_url)
         websocket.enableTrace(False)
         wsUrl = wsParam.create_url()
 
@@ -167,7 +174,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
         },
         "parameter": {
             "chat": {
-                "domain": "general",
+                "domain": "generalv2" if llm_kwargs['llm_model'] == 'sparkv2' else "general",
                 "temperature": llm_kwargs["temperature"],
                 "random_threshold": 0.5,
                 "max_tokens": 4096,
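Reviewer note: `Ws_Param.create_url` (in the unchanged part of `com_sparkapi.py`) authenticates the Spark websocket URL by signing a host/date/request-line string with hmac-sha256 and base64-encoding the digest. A self-contained sketch of that signing step, with a made-up secret and header values (not real credentials):

```python
import base64
import hashlib
import hmac

# Made-up secret and header values, for illustration only.
api_secret = "demo-secret"
signature_origin = ("host: spark-api.xf-yun.com\n"
                    "date: Mon, 01 Jan 2024 00:00:00 GMT\n"
                    "GET /v1.1/chat HTTP/1.1")

# Sign the request string with hmac-sha256, then base64-encode the digest.
digest = hmac.new(api_secret.encode("utf-8"),
                  signature_origin.encode("utf-8"),
                  digestmod=hashlib.sha256).digest()
signature = base64.b64encode(digest).decode("utf-8")
print(len(signature))  # 44: base64 of a 32-byte sha256 digest
```

The resulting signature is then packed into an authorization string and appended to the websocket URL as a query parameter, which is why the v1/v2 split above only needs to swap `gpt_url` — the signing procedure itself is identical for both endpoints.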
requirements.txt CHANGED
@@ -19,4 +19,4 @@ arxiv
 rich
 pypdf2==2.12.1
 websocket-client
-scipdf_parser==0.3
+scipdf_parser>=0.3
tests/test_plugins.py CHANGED
@@ -9,6 +9,11 @@ validate_path() # 返回项目根路径
 from tests.test_utils import plugin_test
 
 if __name__ == "__main__":
+    # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep')
+    plugin_test(plugin='crazy_functions.批量翻译PDF文档_NOUGAT->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
+
+    # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='调用插件,对C:/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析')
+
     # plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表')
 
     # plugin_test(plugin='crazy_functions.解析项目源代码->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn")
@@ -19,7 +24,7 @@ if __name__ == "__main__":
 
     # plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown中译英', main_input="README.md")
 
-    plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
+    # plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
 
     # plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=")
 
themes/common.css ADDED
@@ -0,0 +1,21 @@
+/* hide remove all button */
+.remove-all.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
+    visibility: hidden;
+}
+
+/* hide selector border */
+#input-plugin-group .wrap.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
+    border: 0px;
+    box-shadow: none;
+}
+
+/* hide selector label */
+#input-plugin-group .svelte-1gfkn6j {
+    visibility: hidden;
+}
+
+
+/* height of the upload box */
+.wrap.svelte-xwlu1w {
+    min-height: var(--size-32);
+}
themes/common.js CHANGED
@@ -1,6 +1,6 @@
 function ChatBotHeight() {
     function update_height(){
-        var { panel_height_target, chatbot_height, chatbot } = get_elements();
+        var { panel_height_target, chatbot_height, chatbot } = get_elements(true);
         if (panel_height_target!=chatbot_height)
         {
            var pixelString = panel_height_target.toString() + 'px';
@@ -28,18 +28,24 @@ function ChatBotHeight() {
     }, 50); // 每100毫秒执行一次
 }
 
-function get_elements() {
+function get_elements(consider_state_panel=false) {
     var chatbot = document.querySelector('#gpt-chatbot > div.wrap.svelte-18telvq');
     if (!chatbot) {
         chatbot = document.querySelector('#gpt-chatbot');
     }
-    const panel1 = document.querySelector('#input-panel');
-    const panel2 = document.querySelector('#basic-panel');
-    const panel3 = document.querySelector('#plugin-panel');
-    const panel4 = document.querySelector('#interact-panel');
-    const panel5 = document.querySelector('#input-panel2');
-    const panel_active = document.querySelector('#state-panel');
-    var panel_height_target = (20-panel_active.offsetHeight) + panel1.offsetHeight + panel2.offsetHeight + panel3.offsetHeight + panel4.offsetHeight + panel5.offsetHeight + 21;
+    const panel1 = document.querySelector('#input-panel').getBoundingClientRect();
+    const panel2 = document.querySelector('#basic-panel').getBoundingClientRect()
+    const panel3 = document.querySelector('#plugin-panel').getBoundingClientRect();
+    const panel4 = document.querySelector('#interact-panel').getBoundingClientRect();
+    const panel5 = document.querySelector('#input-panel2').getBoundingClientRect();
+    const panel_active = document.querySelector('#state-panel').getBoundingClientRect();
+    if (consider_state_panel || panel_active.height < 25){
+        document.state_panel_height = panel_active.height;
+    }
+    // 25 是chatbot的label高度, 16 是右侧的gap
+    var panel_height_target = panel1.height + panel2.height + panel3.height + panel4.height + panel5.height - 25 + 16*3;
+    // 禁止动态的state-panel高度影响
+    panel_height_target = panel_height_target + (document.state_panel_height-panel_active.height)
     var panel_height_target = parseInt(panel_height_target);
     var chatbot_height = chatbot.style.height;
     var chatbot_height = parseInt(chatbot_height);
themes/contrast.css ADDED
@@ -0,0 +1,482 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ :root {
2
+ --body-text-color: #FFFFFF;
3
+ --link-text-color: #FFFFFF;
4
+ --link-text-color-active: #FFFFFF;
5
+ --link-text-color-hover: #FFFFFF;
6
+ --link-text-color-visited: #FFFFFF;
7
+ --body-text-color-subdued: #FFFFFF;
8
+ --block-info-text-color: #FFFFFF;
9
+ --block-label-text-color: #FFFFFF;
10
+ --block-title-text-color: #FFFFFF;
11
+ --checkbox-label-text-color: #FFFFFF;
12
+ --checkbox-label-text-color-selected: #FFFFFF;
13
+ --error-text-color: #FFFFFF;
14
+ --button-cancel-text-color: #FFFFFF;
15
+ --button-cancel-text-color-hover: #FFFFFF;
16
+ --button-primary-text-color: #FFFFFF;
17
+ --button-primary-text-color-hover: #FFFFFF;
18
+ --button-secondary-text-color: #FFFFFF;
19
+ --button-secondary-text-color-hover: #FFFFFF;
20
+
21
+
22
+ --border-bottom-right-radius: 0px;
23
+ --border-bottom-left-radius: 0px;
24
+ --border-top-right-radius: 0px;
25
+ --border-top-left-radius: 0px;
26
+ --block-radius: 0px;
27
+ --button-large-radius: 0px;
28
+ --button-small-radius: 0px;
29
+ --block-background-fill: #000000;
30
+
31
+ --border-color-accent: #3cff00;
32
+ --border-color-primary: #3cff00;
33
+ --block-border-color: #3cff00;
34
+ --block-label-border-color: #3cff00;
35
+ --block-title-border-color: #3cff00;
36
+ --panel-border-color: #3cff00;
37
+ --checkbox-border-color: #3cff00;
38
+ --checkbox-border-color-focus: #3cff00;
39
+ --checkbox-border-color-hover: #3cff00;
40
+ --checkbox-border-color-selected: #3cff00;
41
+ --checkbox-label-border-color: #3cff00;
42
+ --checkbox-label-border-color-hover: #3cff00;
43
+ --error-border-color: #3cff00;
44
+ --input-border-color: #3cff00;
45
+ --input-border-color-focus: #3cff00;
46
+ --input-border-color-hover: #3cff00;
47
+ --table-border-color: #3cff00;
48
+ --button-cancel-border-color: #3cff00;
49
+ --button-cancel-border-color-hover: #3cff00;
50
+ --button-primary-border-color: #3cff00;
51
+ --button-primary-border-color-hover: #3cff00;
52
+ --button-secondary-border-color: #3cff00;
53
+ --button-secondary-border-color-hover: #3cff00;
54
+
55
+
56
+ --body-background-fill: #000000;
57
+ --background-fill-primary: #000000;
58
+ --background-fill-secondary: #000000;
59
+ --block-background-fill: #000000;
60
+ --block-label-background-fill: #000000;
61
+ --block-title-background-fill: #000000;
62
+ --panel-background-fill: #000000;
63
+ --chatbot-code-background-color: #000000;
64
+ --checkbox-background-color: #000000;
65
+ --checkbox-background-color-focus: #000000;
66
+ --checkbox-background-color-hover: #000000;
67
+ --checkbox-background-color-selected: #000000;
68
+ --checkbox-label-background-fill: #000000;
69
+ --checkbox-label-background-fill-hover: #000000;
70
+ --checkbox-label-background-fill-selected: #000000;
71
+ --error-background-fill: #000000;
72
+ --input-background-fill: #000000;
73
+ --input-background-fill-focus: #000000;
74
+ --input-background-fill-hover: #000000;
75
+ --stat-background-fill: #000000;
76
+ --table-even-background-fill: #000000;
77
+ --table-odd-background-fill: #000000;
78
+ --button-cancel-background-fill: #000000;
79
+ --button-cancel-background-fill-hover: #000000;
80
+ --button-primary-background-fill: #000000;
81
+ --button-primary-background-fill-hover: #000000;
82
+ --button-secondary-background-fill: #000000;
83
+ --button-secondary-background-fill-hover: #000000;
84
+ --color-accent-soft: #000000;
85
+ }
86
+
87
+ .dark {
88
+ --body-text-color: #FFFFFF;
89
+ --link-text-color: #FFFFFF;
90
+ --link-text-color-active: #FFFFFF;
91
+ --link-text-color-hover: #FFFFFF;
92
+ --link-text-color-visited: #FFFFFF;
93
+ --body-text-color-subdued: #FFFFFF;
94
+ --block-info-text-color: #FFFFFF;
95
+ --block-label-text-color: #FFFFFF;
96
+ --block-title-text-color: #FFFFFF;
97
+ --checkbox-label-text-color: #FFFFFF;
98
+ --checkbox-label-text-color-selected: #FFFFFF;
99
+ --error-text-color: #FFFFFF;
100
+ --button-cancel-text-color: #FFFFFF;
101
+ --button-cancel-text-color-hover: #FFFFFF;
102
+ --button-primary-text-color: #FFFFFF;
103
+ --button-primary-text-color-hover: #FFFFFF;
104
+ --button-secondary-text-color: #FFFFFF;
105
+ --button-secondary-text-color-hover: #FFFFFF;
106
+
107
+
108
+
109
+ --border-bottom-right-radius: 0px;
110
+ --border-bottom-left-radius: 0px;
111
+ --border-top-right-radius: 0px;
112
+ --border-top-left-radius: 0px;
113
+ --block-radius: 0px;
114
+ --button-large-radius: 0px;
115
+ --button-small-radius: 0px;
116
+ --block-background-fill: #000000;
117
+
118
+ --border-color-accent: #3cff00;
119
+ --border-color-primary: #3cff00;
120
+ --block-border-color: #3cff00;
121
+ --block-label-border-color: #3cff00;
122
+ --block-title-border-color: #3cff00;
123
+ --panel-border-color: #3cff00;
124
+ --checkbox-border-color: #3cff00;
125
+ --checkbox-border-color-focus: #3cff00;
126
+ --checkbox-border-color-hover: #3cff00;
127
+ --checkbox-border-color-selected: #3cff00;
128
+ --checkbox-label-border-color: #3cff00;
129
+ --checkbox-label-border-color-hover: #3cff00;
130
+ --error-border-color: #3cff00;
131
+ --input-border-color: #3cff00;
132
+ --input-border-color-focus: #3cff00;
133
+ --input-border-color-hover: #3cff00;
134
+ --table-border-color: #3cff00;
135
+ --button-cancel-border-color: #3cff00;
136
+ --button-cancel-border-color-hover: #3cff00;
137
+ --button-primary-border-color: #3cff00;
138
+ --button-primary-border-color-hover: #3cff00;
139
+ --button-secondary-border-color: #3cff00;
140
+ --button-secondary-border-color-hover: #3cff00;
141
+
142
+
143
+ --body-background-fill: #000000;
144
+ --background-fill-primary: #000000;
145
+ --background-fill-secondary: #000000;
146
+ --block-background-fill: #000000;
147
+ --block-label-background-fill: #000000;
148
+ --block-title-background-fill: #000000;
149
+ --panel-background-fill: #000000;
150
+ --chatbot-code-background-color: #000000;
151
+ --checkbox-background-color: #000000;
152
+ --checkbox-background-color-focus: #000000;
153
+ --checkbox-background-color-hover: #000000;
154
+ --checkbox-background-color-selected: #000000;
155
+ --checkbox-label-background-fill: #000000;
156
+ --checkbox-label-background-fill-hover: #000000;
157
+ --checkbox-label-background-fill-selected: #000000;
158
+ --error-background-fill: #000000;
159
+ --input-background-fill: #000000;
160
+ --input-background-fill-focus: #000000;
161
+ --input-background-fill-hover: #000000;
162
+ --stat-background-fill: #000000;
163
+ --table-even-background-fill: #000000;
164
+ --table-odd-background-fill: #000000;
165
+ --button-cancel-background-fill: #000000;
166
+ --button-cancel-background-fill-hover: #000000;
167
+ --button-primary-background-fill: #000000;
168
+ --button-primary-background-fill-hover: #000000;
169
+ --button-secondary-background-fill: #000000;
170
+ --button-secondary-background-fill-hover: #000000;
171
+ --color-accent-soft: #000000;
172
+ }
173
+
174
+
175
+
176
+ .block.svelte-mppz8v {
177
+ border-color: #3cff00;
178
+ }
179
+
180
+ /* 插件下拉菜单 */
181
+ #plugin-panel .wrap.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
182
+ box-shadow: var(--input-shadow);
183
+ border: var(--input-border-width) dashed var(--border-color-primary);
184
+ border-radius: 4px;
185
+ }
186
+
187
+ #plugin-panel .dropdown-arrow.svelte-p5edak {
188
+ width: 50px;
189
+ }
190
+ #plugin-panel input.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
191
+ padding-left: 5px;
192
+ }
193
+ .root{
194
+ border-bottom-right-radius: 0px;
195
+ border-bottom-left-radius: 0px;
196
+ border-top-right-radius: 0px;
197
+ border-top-left-radius: 0px;
198
+ }
199
+
200
+ /* 小按钮 */
201
+ .sm.svelte-1ipelgc {
202
+ font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
203
+ --button-small-text-weight: 600;
204
+ --button-small-text-size: 16px;
205
+ border-bottom-right-radius: 0px;
206
+ border-bottom-left-radius: 0px;
207
+ border-top-right-radius: 0px;
208
+ border-top-left-radius: 0px;
209
+ }
210
+
211
+ #plugin-panel .sm.svelte-1ipelgc {
212
+ font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
213
+ --button-small-text-weight: 400;
214
+ --button-small-text-size: 14px;
215
+ border-bottom-right-radius: 0px;
216
+ border-bottom-left-radius: 0px;
217
+ border-top-right-radius: 0px;
218
+ border-top-left-radius: 0px;
219
+ }
220
+
221
+ .wrap-inner.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
222
+ padding: 0%;
223
+ }
224
+
+.markdown-body table {
+    margin: 1em 0;
+    border-collapse: collapse;
+    empty-cells: show;
+}
+
+.markdown-body th, .markdown-body td {
+    border: 1.2px solid var(--border-color-primary);
+    padding: 5px;
+}
+
+.markdown-body thead {
+    background-color: rgb(0, 0, 0);
+}
+
+.markdown-body thead th {
+    padding: .5em .2em;
+}
+
+.normal_mut_select .svelte-1gfkn6j {
+    float: left;
+    width: auto;
+    line-height: 260% !important;
+}
+
+.markdown-body ol, .markdown-body ul {
+    padding-inline-start: 2em !important;
+}
+
+/* chat box. */
+[class *= "message"] {
+    border-radius: var(--radius-xl) !important;
+    /* padding: var(--spacing-xl) !important; */
+    /* font-size: var(--text-md) !important; */
+    /* line-height: var(--line-md) !important; */
+    /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */
+    /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */
+}
+[data-testid = "bot"] {
+    max-width: 95%;
+    /* width: auto !important; */
+    border-bottom-left-radius: 0 !important;
+}
+[data-testid = "user"] {
+    max-width: 100%;
+    /* width: auto !important; */
+    border-bottom-right-radius: 0 !important;
+}
+
+/* inline code block. */
+.markdown-body code {
+    display: inline;
+    white-space: break-spaces;
+    border-radius: 6px;
+    margin: 0 2px 0 2px;
+    padding: .2em .4em .1em .4em;
+    background-color: rgba(0, 0, 0, 0.95);
+    color: #c9d1d9;
+}
+
+.dark .markdown-body code {
+    display: inline;
+    white-space: break-spaces;
+    border-radius: 6px;
+    margin: 0 2px 0 2px;
+    padding: .2em .4em .1em .4em;
+    background-color: rgba(0,0,0,0.2);
+}
+
+/* code block css */
+.markdown-body pre code {
+    display: block;
+    overflow: auto;
+    white-space: pre;
+    background-color: rgba(0, 0, 0, 0.95);
+    border-radius: 10px;
+    padding: 1em;
+    margin: 1em 2em 1em 0.5em;
+}
+
+.dark .markdown-body pre code {
+    display: block;
+    overflow: auto;
+    white-space: pre;
+    background-color: rgba(0,0,0,0.2);
+    border-radius: 10px;
+    padding: 1em;
+    margin: 1em 2em 1em 0.5em;
+}
+
+/* .mic-wrap.svelte-1thnwz {
+
+} */
+.block.svelte-mppz8v > .mic-wrap.svelte-1thnwz{
+    justify-content: center;
+    display: flex;
+    padding: 0;
+}
+
+.codehilite .hll { background-color: #6e7681 }
+.codehilite .c { color: #8b949e; font-style: italic } /* Comment */
+.codehilite .err { color: #f85149 } /* Error */
+.codehilite .esc { color: #c9d1d9 } /* Escape */
+.codehilite .g { color: #c9d1d9 } /* Generic */
+.codehilite .k { color: #ff7b72 } /* Keyword */
+.codehilite .l { color: #a5d6ff } /* Literal */
+.codehilite .n { color: #c9d1d9 } /* Name */
+.codehilite .o { color: #ff7b72; font-weight: bold } /* Operator */
+.codehilite .x { color: #c9d1d9 } /* Other */
+.codehilite .p { color: #c9d1d9 } /* Punctuation */
+.codehilite .ch { color: #8b949e; font-style: italic } /* Comment.Hashbang */
+.codehilite .cm { color: #8b949e; font-style: italic } /* Comment.Multiline */
+.codehilite .cp { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Preproc */
+.codehilite .cpf { color: #8b949e; font-style: italic } /* Comment.PreprocFile */
+.codehilite .c1 { color: #8b949e; font-style: italic } /* Comment.Single */
+.codehilite .cs { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Special */
+.codehilite .gd { color: #ffa198; background-color: #490202 } /* Generic.Deleted */
+.codehilite .ge { color: #c9d1d9; font-style: italic } /* Generic.Emph */
+.codehilite .gr { color: #ffa198 } /* Generic.Error */
+.codehilite .gh { color: #79c0ff; font-weight: bold } /* Generic.Heading */
+.codehilite .gi { color: #56d364; background-color: #0f5323 } /* Generic.Inserted */
+.codehilite .go { color: #8b949e } /* Generic.Output */
+.codehilite .gp { color: #8b949e } /* Generic.Prompt */
+.codehilite .gs { color: #c9d1d9; font-weight: bold } /* Generic.Strong */
+.codehilite .gu { color: #79c0ff } /* Generic.Subheading */
+.codehilite .gt { color: #ff7b72 } /* Generic.Traceback */
+.codehilite .g-Underline { color: #c9d1d9; text-decoration: underline } /* Generic.Underline */
+.codehilite .kc { color: #79c0ff } /* Keyword.Constant */
+.codehilite .kd { color: #ff7b72 } /* Keyword.Declaration */
+.codehilite .kn { color: #ff7b72 } /* Keyword.Namespace */
+.codehilite .kp { color: #79c0ff } /* Keyword.Pseudo */
+.codehilite .kr { color: #ff7b72 } /* Keyword.Reserved */
+.codehilite .kt { color: #ff7b72 } /* Keyword.Type */
+.codehilite .ld { color: #79c0ff } /* Literal.Date */
+.codehilite .m { color: #a5d6ff } /* Literal.Number */
+.codehilite .s { color: #a5d6ff } /* Literal.String */
+.codehilite .na { color: #c9d1d9 } /* Name.Attribute */
+.codehilite .nb { color: #c9d1d9 } /* Name.Builtin */
+.codehilite .nc { color: #f0883e; font-weight: bold } /* Name.Class */
+.codehilite .no { color: #79c0ff; font-weight: bold } /* Name.Constant */
+.codehilite .nd { color: #d2a8ff; font-weight: bold } /* Name.Decorator */
+.codehilite .ni { color: #ffa657 } /* Name.Entity */
+.codehilite .ne { color: #f0883e; font-weight: bold } /* Name.Exception */
+.codehilite .nf { color: #d2a8ff; font-weight: bold } /* Name.Function */
+.codehilite .nl { color: #79c0ff; font-weight: bold } /* Name.Label */
+.codehilite .nn { color: #ff7b72 } /* Name.Namespace */
+.codehilite .nx { color: #c9d1d9 } /* Name.Other */
+.codehilite .py { color: #79c0ff } /* Name.Property */
+.codehilite .nt { color: #7ee787 } /* Name.Tag */
+.codehilite .nv { color: #79c0ff } /* Name.Variable */
+.codehilite .ow { color: #ff7b72; font-weight: bold } /* Operator.Word */
+.codehilite .pm { color: #c9d1d9 } /* Punctuation.Marker */
+.codehilite .w { color: #6e7681 } /* Text.Whitespace */
+.codehilite .mb { color: #a5d6ff } /* Literal.Number.Bin */
+.codehilite .mf { color: #a5d6ff } /* Literal.Number.Float */
+.codehilite .mh { color: #a5d6ff } /* Literal.Number.Hex */
+.codehilite .mi { color: #a5d6ff } /* Literal.Number.Integer */
+.codehilite .mo { color: #a5d6ff } /* Literal.Number.Oct */
+.codehilite .sa { color: #79c0ff } /* Literal.String.Affix */
+.codehilite .sb { color: #a5d6ff } /* Literal.String.Backtick */
+.codehilite .sc { color: #a5d6ff } /* Literal.String.Char */
+.codehilite .dl { color: #79c0ff } /* Literal.String.Delimiter */
+.codehilite .sd { color: #a5d6ff } /* Literal.String.Doc */
+.codehilite .s2 { color: #a5d6ff } /* Literal.String.Double */
+.codehilite .se { color: #79c0ff } /* Literal.String.Escape */
+.codehilite .sh { color: #79c0ff } /* Literal.String.Heredoc */
+.codehilite .si { color: #a5d6ff } /* Literal.String.Interpol */
+.codehilite .sx { color: #a5d6ff } /* Literal.String.Other */
+.codehilite .sr { color: #79c0ff } /* Literal.String.Regex */
+.codehilite .s1 { color: #a5d6ff } /* Literal.String.Single */
+.codehilite .ss { color: #a5d6ff } /* Literal.String.Symbol */
+.codehilite .bp { color: #c9d1d9 } /* Name.Builtin.Pseudo */
+.codehilite .fm { color: #d2a8ff; font-weight: bold } /* Name.Function.Magic */
+.codehilite .vc { color: #79c0ff } /* Name.Variable.Class */
+.codehilite .vg { color: #79c0ff } /* Name.Variable.Global */
+.codehilite .vi { color: #79c0ff } /* Name.Variable.Instance */
+.codehilite .vm { color: #79c0ff } /* Name.Variable.Magic */
+.codehilite .il { color: #a5d6ff } /* Literal.Number.Integer.Long */
+
+.dark .codehilite .hll { background-color: #2C3B41 }
+.dark .codehilite .c { color: #79d618; font-style: italic } /* Comment */
+.dark .codehilite .err { color: #FF5370 } /* Error */
+.dark .codehilite .esc { color: #89DDFF } /* Escape */
+.dark .codehilite .g { color: #EEFFFF } /* Generic */
+.dark .codehilite .k { color: #BB80B3 } /* Keyword */
+.dark .codehilite .l { color: #C3E88D } /* Literal */
+.dark .codehilite .n { color: #EEFFFF } /* Name */
+.dark .codehilite .o { color: #89DDFF } /* Operator */
+.dark .codehilite .p { color: #89DDFF } /* Punctuation */
+.dark .codehilite .ch { color: #79d618; font-style: italic } /* Comment.Hashbang */
+.dark .codehilite .cm { color: #79d618; font-style: italic } /* Comment.Multiline */
+.dark .codehilite .cp { color: #79d618; font-style: italic } /* Comment.Preproc */
+.dark .codehilite .cpf { color: #79d618; font-style: italic } /* Comment.PreprocFile */
+.dark .codehilite .c1 { color: #79d618; font-style: italic } /* Comment.Single */
+.dark .codehilite .cs { color: #79d618; font-style: italic } /* Comment.Special */
+.dark .codehilite .gd { color: #FF5370 } /* Generic.Deleted */
+.dark .codehilite .ge { color: #89DDFF } /* Generic.Emph */
+.dark .codehilite .gr { color: #FF5370 } /* Generic.Error */
+.dark .codehilite .gh { color: #C3E88D } /* Generic.Heading */
+.dark .codehilite .gi { color: #C3E88D } /* Generic.Inserted */
+.dark .codehilite .go { color: #79d618 } /* Generic.Output */
+.dark .codehilite .gp { color: #FFCB6B } /* Generic.Prompt */
+.dark .codehilite .gs { color: #FF5370 } /* Generic.Strong */
+.dark .codehilite .gu { color: #89DDFF } /* Generic.Subheading */
+.dark .codehilite .gt { color: #FF5370 } /* Generic.Traceback */
+.dark .codehilite .kc { color: #89DDFF } /* Keyword.Constant */
+.dark .codehilite .kd { color: #BB80B3 } /* Keyword.Declaration */
+.dark .codehilite .kn { color: #89DDFF; font-style: italic } /* Keyword.Namespace */
+.dark .codehilite .kp { color: #89DDFF } /* Keyword.Pseudo */
+.dark .codehilite .kr { color: #BB80B3 } /* Keyword.Reserved */
+.dark .codehilite .kt { color: #BB80B3 } /* Keyword.Type */
+.dark .codehilite .ld { color: #C3E88D } /* Literal.Date */
+.dark .codehilite .m { color: #F78C6C } /* Literal.Number */
+.dark .codehilite .s { color: #C3E88D } /* Literal.String */
+.dark .codehilite .na { color: #BB80B3 } /* Name.Attribute */
+.dark .codehilite .nb { color: #82AAFF } /* Name.Builtin */
+.dark .codehilite .nc { color: #FFCB6B } /* Name.Class */
+.dark .codehilite .no { color: #EEFFFF } /* Name.Constant */
+.dark .codehilite .nd { color: #82AAFF } /* Name.Decorator */
+.dark .codehilite .ni { color: #89DDFF } /* Name.Entity */
+.dark .codehilite .ne { color: #FFCB6B } /* Name.Exception */
+.dark .codehilite .nf { color: #82AAFF } /* Name.Function */
+.dark .codehilite .nl { color: #82AAFF } /* Name.Label */
+.dark .codehilite .nn { color: #FFCB6B } /* Name.Namespace */
+.dark .codehilite .nx { color: #EEFFFF } /* Name.Other */
+.dark .codehilite .py { color: #FFCB6B } /* Name.Property */
+.dark .codehilite .nt { color: #FF5370 } /* Name.Tag */
+.dark .codehilite .nv { color: #89DDFF } /* Name.Variable */
+.dark .codehilite .ow { color: #89DDFF; font-style: italic } /* Operator.Word */
+.dark .codehilite .pm { color: #89DDFF } /* Punctuation.Marker */
+.dark .codehilite .w { color: #EEFFFF } /* Text.Whitespace */
+.dark .codehilite .mb { color: #F78C6C } /* Literal.Number.Bin */
+.dark .codehilite .mf { color: #F78C6C } /* Literal.Number.Float */
+.dark .codehilite .mh { color: #F78C6C } /* Literal.Number.Hex */
+.dark .codehilite .mi { color: #F78C6C } /* Literal.Number.Integer */
+.dark .codehilite .mo { color: #F78C6C } /* Literal.Number.Oct */
+.dark .codehilite .sa { color: #BB80B3 } /* Literal.String.Affix */
+.dark .codehilite .sb { color: #C3E88D } /* Literal.String.Backtick */
+.dark .codehilite .sc { color: #C3E88D } /* Literal.String.Char */
+.dark .codehilite .dl { color: #EEFFFF } /* Literal.String.Delimiter */
+.dark .codehilite .sd { color: #79d618; font-style: italic } /* Literal.String.Doc */
+.dark .codehilite .s2 { color: #C3E88D } /* Literal.String.Double */
+.dark .codehilite .se { color: #EEFFFF } /* Literal.String.Escape */
+.dark .codehilite .sh { color: #C3E88D } /* Literal.String.Heredoc */
+.dark .codehilite .si { color: #89DDFF } /* Literal.String.Interpol */
+.dark .codehilite .sx { color: #C3E88D } /* Literal.String.Other */
+.dark .codehilite .sr { color: #89DDFF } /* Literal.String.Regex */
+.dark .codehilite .s1 { color: #C3E88D } /* Literal.String.Single */
+.dark .codehilite .ss { color: #89DDFF } /* Literal.String.Symbol */
+.dark .codehilite .bp { color: #89DDFF } /* Name.Builtin.Pseudo */
+.dark .codehilite .fm { color: #82AAFF } /* Name.Function.Magic */
+.dark .codehilite .vc { color: #89DDFF } /* Name.Variable.Class */
+.dark .codehilite .vg { color: #89DDFF } /* Name.Variable.Global */
+.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
+.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
+.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */
+
themes/contrast.py ADDED
@@ -0,0 +1,88 @@
+import gradio as gr
+from toolbox import get_conf
+CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
+
+def adjust_theme():
+
+    try:
+        color_er = gr.themes.utils.colors.fuchsia
+        set_theme = gr.themes.Default(
+            primary_hue=gr.themes.utils.colors.orange,
+            neutral_hue=gr.themes.utils.colors.gray,
+            font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
+            font_mono=["ui-monospace", "Consolas", "monospace"])
+        set_theme.set(
+            # Colors
+            input_background_fill_dark="*neutral_800",
+            # Transition
+            button_transition="none",
+            # Shadows
+            button_shadow="*shadow_drop",
+            button_shadow_hover="*shadow_drop_lg",
+            button_shadow_active="*shadow_inset",
+            input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset",
+            input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset",
+            input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset",
+            checkbox_label_shadow="*shadow_drop",
+            block_shadow="*shadow_drop",
+            form_gap_width="1px",
+            # Button borders
+            input_border_width="1px",
+            input_background_fill="white",
+            # Gradients
+            stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)",
+            stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)",
+            error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)",
+            error_background_fill_dark="*background_fill_primary",
+            checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)",
+            checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)",
+            checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)",
+            checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)",
+            button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)",
+            button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)",
+            button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)",
+            button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)",
+            button_primary_border_color_dark="*primary_500",
+            button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)",
+            button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)",
+            button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)",
+            button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)",
+            button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})",
+            button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})",
+            button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})",
+            button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})",
+            button_cancel_border_color=color_er.c200,
+            button_cancel_border_color_dark=color_er.c600,
+            button_cancel_text_color=color_er.c600,
+            button_cancel_text_color_dark="white",
+        )
+
+        if LAYOUT=="TOP-DOWN":
+            js = ""
+        else:
+            with open('themes/common.js', 'r', encoding='utf8') as f:
+                js = f"<script>{f.read()}</script>"
+
+        # Add a cute Live2D mascot
+        if ADD_WAIFU:
+            js += """
+            <script src="file=docs/waifu_plugin/jquery.min.js"></script>
+            <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
+            <script src="file=docs/waifu_plugin/autoload.js"></script>
+            """
+        gradio_original_template_fn = gr.routes.templates.TemplateResponse
+        def gradio_new_template_fn(*args, **kwargs):
+            res = gradio_original_template_fn(*args, **kwargs)
+            res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
+            res.init_headers()
+            return res
+        gr.routes.templates.TemplateResponse = gradio_new_template_fn  # override gradio template
+    except:
+        set_theme = None
+        print('gradio版本较旧, 不能自定义字体和颜色')
+    return set_theme
+
+with open("themes/contrast.css", "r", encoding="utf-8") as f:
+    advanced_css = f.read()
+with open("themes/common.css", "r", encoding="utf-8") as f:
+    advanced_css += f.read()
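The interesting part of `contrast.py` above is how it injects theme JavaScript: it wraps gradio's `TemplateResponse` so the `<script>` payload is appended just before the closing `</html>` tag of the rendered page. A minimal sketch of that byte-level injection (the helper name `inject_before_close` is mine, not from the repo; the real patch applies the same `bytes.replace` to `res.body` inside the wrapped template function):

```python
def inject_before_close(body: bytes, js: str) -> bytes:
    # Swap the closing tag for payload + closing tag; if no </html>
    # is present, the body is returned unchanged.
    return body.replace(b'</html>', f'{js}</html>'.encode("utf8"))

page = b"<html><body>hi</body></html>"
patched = inject_before_close(page, "<script>console.log('theme')</script>")
```

Because `bytes.replace` is a no-op when the needle is absent, the patch degrades gracefully on non-HTML responses.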
themes/default.css CHANGED
@@ -1,3 +1,46 @@
+.dark {
+    --background-fill-primary: #050810;
+    --body-background-fill: var(--background-fill-primary);
+}
+/* Plugin dropdown menu */
+#plugin-panel .wrap.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
+    box-shadow: var(--input-shadow);
+    border: var(--input-border-width) dashed var(--border-color-primary);
+    border-radius: 4px;
+}
+
+#plugin-panel .dropdown-arrow.svelte-p5edak {
+    width: 50px;
+}
+#plugin-panel input.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
+    padding-left: 5px;
+}
+
+/* Small buttons */
+.sm.svelte-1ipelgc {
+    font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
+    --button-small-text-weight: 600;
+    --button-small-text-size: 16px;
+    border-bottom-right-radius: 6px;
+    border-bottom-left-radius: 6px;
+    border-top-right-radius: 6px;
+    border-top-left-radius: 6px;
+}
+
+#plugin-panel .sm.svelte-1ipelgc {
+    font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
+    --button-small-text-weight: 400;
+    --button-small-text-size: 14px;
+    border-bottom-right-radius: 6px;
+    border-bottom-left-radius: 6px;
+    border-top-right-radius: 6px;
+    border-top-left-radius: 6px;
+}
+
+.wrap-inner.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
+    padding: 0%;
+}
+
 .markdown-body table {
     margin: 1em 0;
     border-collapse: collapse;
@@ -17,6 +60,12 @@
     padding: .5em .2em;
 }
 
+.normal_mut_select .svelte-1gfkn6j {
+    float: left;
+    width: auto;
+    line-height: 260% !important;
+}
+
 .markdown-body ol, .markdown-body ul {
     padding-inline-start: 2em !important;
 }
themes/default.py CHANGED
@@ -9,7 +9,7 @@ def adjust_theme():
     set_theme = gr.themes.Default(
         primary_hue=gr.themes.utils.colors.orange,
         neutral_hue=gr.themes.utils.colors.gray,
-        font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui"],
+        font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
        font_mono=["ui-monospace", "Consolas", "monospace"])
     set_theme.set(
         # Colors
@@ -83,4 +83,6 @@ def adjust_theme():
     return set_theme
 
 with open("themes/default.css", "r", encoding="utf-8") as f:
     advanced_css = f.read()
+with open("themes/common.css", "r", encoding="utf-8") as f:
+    advanced_css += f.read()
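This commit repeats one pattern in `default.py`, `green.py`, and `contrast.py`: build `advanced_css` from the per-theme stylesheet, then append the shared `themes/common.css`. A sketch of that concatenation as a helper (the function name is mine; load order matters, since later rules win ties on CSS specificity):

```python
def load_advanced_css(theme_css: str, common_css: str = "themes/common.css") -> str:
    # Per-theme stylesheet first, shared rules appended after,
    # mirroring the repeated open/read/append blocks in the diff.
    css = ""
    for path in (theme_css, common_css):
        with open(path, "r", encoding="utf-8") as f:
            css += f.read()
    return css
```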
themes/green.css CHANGED
@@ -24,7 +24,11 @@ mspace {
     border-color: yellow;
   }
 }
-
+.normal_mut_select .svelte-1gfkn6j {
+    float: left;
+    width: auto;
+    line-height: 260% !important;
+}
 #highlight_update {
     animation-name: highlight;
     animation-duration: 0.75s;
themes/green.py CHANGED
@@ -106,3 +106,5 @@ def adjust_theme():
 
 with open("themes/green.css", "r", encoding="utf-8") as f:
     advanced_css = f.read()
+with open("themes/common.css", "r", encoding="utf-8") as f:
+    advanced_css += f.read()
themes/theme.py CHANGED
@@ -5,6 +5,9 @@ THEME, = get_conf('THEME')
 if THEME == 'Chuanhu-Small-and-Beautiful':
     from .green import adjust_theme, advanced_css
     theme_declaration = "<h2 align=\"center\" class=\"small\">[Chuanhu-Small-and-Beautiful主题]</h2>"
+elif THEME == 'High-Contrast':
+    from .contrast import adjust_theme, advanced_css
+    theme_declaration = ""
 else:
     from .default import adjust_theme, advanced_css
     theme_declaration = ""
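The `if/elif/else` chain in `theme.py` amounts to a name-to-module lookup with `.default` as the fallback. A sketch of the same dispatch as a pure function (the function name is mine; the module paths and the default fallback mirror the branches in the diff):

```python
def resolve_theme_module(theme: str) -> str:
    # THEME config value -> relative module holding adjust_theme/advanced_css
    mapping = {
        'Chuanhu-Small-and-Beautiful': '.green',
        'High-Contrast': '.contrast',
    }
    # Any unrecognized value falls through to the default theme,
    # matching the else branch above.
    return mapping.get(theme, '.default')
```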