"version": 3.48

This view is limited to 50 files because it contains too many changes.
- Dockerfile +14 -8
- README.md +66 -42
- app.py +54 -31
- check_proxy.py +15 -5
- config.py +88 -29
- core_functional.py +21 -5
- crazy_functions/Langchain知识库.py +1 -1
- crazy_functions/Latex输出PDF结果.py +2 -2
- crazy_functions/chatglm微调工具.py +141 -0
- crazy_functions/crazy_utils.py +6 -1
- crazy_functions/latex_fns/latex_actions.py +447 -0
- crazy_functions/latex_fns/latex_toolbox.py +456 -0
- crazy_functions/live_audio/aliyunASR.py +130 -0
- crazy_functions/live_audio/audio_io.py +51 -0
- crazy_functions/下载arxiv论文翻译摘要.py +2 -2
- crazy_functions/交互功能函数模板.py +63 -0
- crazy_functions/命令行助手.py +31 -0
- crazy_functions/图片生成.py +5 -3
- crazy_functions/对话历史存档.py +1 -1
- crazy_functions/总结word文档.py +13 -11
- crazy_functions/批量Markdown翻译.py +32 -20
- crazy_functions/批量总结PDF文档.py +98 -115
- crazy_functions/批量翻译PDF文档_多线程.py +5 -14
- crazy_functions/虚空终端.py +68 -80
- crazy_functions/询问多个大语言模型.py +6 -4
- crazy_functions/语音助手.py +195 -0
- crazy_functions/谷歌检索小助手.py +1 -1
- crazy_functions/辅助回答.py +28 -0
- crazy_functions/高级功能函数模板.py +5 -28
- docker-compose.yml +39 -21
- docs/Dockerfile+ChatGLM +2 -2
- docs/Dockerfile+JittorLLM +2 -2
- docs/GithubAction+ChatGLM+Moss +2 -2
- docs/GithubAction+JittorLLMs +2 -2
- docs/GithubAction+NoLocal+AudioAssistant +22 -0
- docs/README.md.German.md +12 -12
- docs/README.md.Italian.md +12 -12
- docs/README.md.Korean.md +10 -10
- docs/README.md.Portuguese.md +10 -10
- docs/README_EN.md +12 -12
- docs/README_FR.md +11 -11
- docs/README_JP.md +12 -12
- docs/README_RS.md +11 -11
- docs/translate_english.json +495 -1
- docs/translate_japanese.json +628 -2
- docs/translate_std.json +87 -0
- docs/translate_traditionalchinese.json +829 -66
- docs/use_audio.md +64 -0
- docs/use_azure.md +9 -42
- multi_language.py +15 -10
Dockerfile
CHANGED
@@ -1,28 +1,34 @@
-# This Dockerfile is for building an environment without local models; if you need chatglm
+# This Dockerfile is for building an environment without local models; for local models such as chatglm, or for the latex runtime dependencies, see docker-compose.yml
-# How to build: edit `config.py` first, then docker build -t gpt-academic .
-#
+# How to build: edit `config.py` first, then `docker build -t gpt-academic . `
+# How to run (on Linux): `docker run --rm -it --net=host gpt-academic `
+# How to run (other operating systems, pick any fixed port such as 50923): `docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic `
 FROM python:3.11
 
+# Optional step: switch to the Aliyun pip mirror
 RUN echo '[global]' > /etc/pip.conf && \
     echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
     echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
 
+# Enter the working directory
 WORKDIR /gpt
 
-
-
-# Install dependencies
+# Install most dependencies first, so the Docker layer cache speeds up later builds
 COPY requirements.txt ./
 COPY ./docs/gradio-3.32.2-py3-none-any.whl ./docs/gradio-3.32.2-py3-none-any.whl
 RUN pip3 install -r requirements.txt
-
+
+
+# Copy in the project files and install the remaining dependencies
 COPY . .
 RUN pip3 install -r requirements.txt
 
-
+# Optional step: warm up the modules
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
 
+
 # Launch
 CMD ["python3", "-u", "main.py"]
README.md
CHANGED
@@ -12,33 +12,34 @@ pinned: false
 # ChatGPT 学术优化
 > **Note**
 >
-> 2023.
+> 2023.7.8: Gradio and Pydantic dependencies adjusted; `requirements.txt` has been modified. Please **update the code** promptly, and when installing dependencies, strictly use the versions **pinned** in `requirements.txt`
 >
-> `pip install -r requirements.txt`
+> `pip install -r requirements.txt`
 
-# <img src="docs/logo.png" width="40" > GPT 学术优化 (GPT Academic)
-
+# <div align=center><img src="docs/logo.png" width="40"> GPT 学术优化 (GPT Academic)</div>
+
+**If you like this project, please give it a Star; if you have come up with useful shortcuts or function plugins, pull requests are welcome!**
 
 If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
 To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
 
 > **Note**
 >
-> 1
+> 1. Note that only function plugins (buttons) marked with a **highlight (e.g. red)** support reading files, and some plugins sit in the **drop-down menu** of the plugin area. PRs for new plugins are welcomed and handled with **top priority**.
 >
 > 2. The function of every file in this project is documented in detail in the self-translated analysis [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can also click the relevant function plugin at any time to call GPT and regenerate the project's self-analysis report. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation](#installation).
 >
-> 3. This project is compatible with, and encourages trying, domestic large language models
+> 3. This project is compatible with, and encourages trying, domestic large language models such as ChatGLM and Moss. Multiple api-keys can coexist, configured like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"` in the config file. To swap the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to submit.
 
 
 <div align="center">
 
-
+Feature (⭐ = recently added) | Description
 --- | ---
+⭐[Connect new models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | ⭐Alibaba DAMO Academy [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [InternLM](https://github.com/InternLM/InternLM), iFlytek [Spark](https://xinghuo.xfyun.cn/)
 One-click polishing | Supports one-click polishing and one-click checking of paper grammar errors
 One-click Chinese-English translation | One-click Chinese-English translation
 One-click code explanation | Display, explain, generate, and annotate code
@@ -53,16 +54,19 @@ Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] ...
 Chat analysis report generation | [Function plugin] Automatically generates a summary report after running
 [Full PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title & abstract of a PDF paper and translates the full text (multithreaded)
 [Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click
+One-click Latex paper proofreading | [Function plugin] Grammarly-style grammar and spelling correction of Latex papers, with a side-by-side comparison PDF
 [Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let gpt help you [write related works](https://www.bilibili.com/video/BV1GP411U7Az/)
-Internet information aggregation + GPT | [Function plugin] One click to [let GPT
-⭐Fine-grained Arxiv paper translation | [Function plugin] One click to [translate arxiv papers at very high quality](https://www.bilibili.com/video/BV1dz4y1v77A/)
+Internet information aggregation + GPT | [Function plugin] One click to [let GPT fetch information from the Internet](https://www.bilibili.com/video/BV1om4y127ck) before answering questions, so information never goes stale
+⭐Fine-grained Arxiv paper translation ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Function plugin] One click to [translate arxiv papers at very high quality](https://www.bilibili.com/video/BV1dz4y1v77A/), currently the best paper translation tool
+⭐[Real-time voice conversation input](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Function plugin] Asynchronously [listens to the audio](https://www.bilibili.com/video/BV1AV4y187Uy/), segments sentences automatically, and finds the right moment to answer
 Formula/image/table display | Shows formulas in both [tex and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png); supports formula and code highlighting
 Multithreaded function plugin support | Call chatgpt on multiple threads; process [huge volumes of text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs in one click
-Launch with dark
-[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support | Being served simultaneously by GPT3.5, GPT4, [Tsinghua
-
-
-
+Launch with a dark [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Append ```/?__theme=dark``` to the browser url to switch to the dark theme
+[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support | Being served simultaneously by GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) must feel great, right?
+⭐ChatGLM2 fine-tuned models | Supports loading ChatGLM2 fine-tuned models; provides a ChatGLM2 fine-tuning helper plugin
+More LLM models, with [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) support | Newbing interface (New Bing) added; Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) introduced, with support for [LLaMA](https://github.com/facebookresearch/llama) and [PanGu-α](https://openi.org.cn/pangu/)
+⭐[Void Terminal](https://github.com/binary-husky/void-terminal) pip package | Call this project's function plugins directly from Python, without the GUI (under development)
+More new feature demos (image generation, etc.) …… | See the end of this document ……
 </div>
 
 
@@ -97,13 +101,12 @@ Chat analysis report generation | [Function plugin] Automatically generates a summary report after running
 <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
 </div>
 
----
 # Installation
-
+### Installation method I: run directly (Windows, Linux or MacOS)
 
 1. Download the project
 ```sh
-git clone https://github.com/binary-husky/gpt_academic.git
+git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
 cd gpt_academic
 ```
 
@@ -126,19 +129,22 @@ python -m pip install -r requirements.txt # this step works the same as the pip install step
 ```
 
 
-<details><summary>If you need Tsinghua
+<details><summary>Click to expand if you need Tsinghua ChatGLM2 / Fudan MOSS / RWKV as the backend</summary>
 <p>
 
-[Optional step] If you need Tsinghua
+[Optional step] If you need Tsinghua ChatGLM2 / Fudan MOSS as the backend, more dependencies must be installed (prerequisites: familiar with Python + have used Pytorch + the machine is strong enough):
 ```sh
-# [Optional step I] Support Tsinghua
+# [Optional step I] Support Tsinghua ChatGLM2. Note on Tsinghua ChatGLM: if you hit the error "Call ChatGLM fail 不能正常加载ChatGLM的参数", refer to the following: 1: the default install above is the torch+cpu build; to use cuda, uninstall torch and reinstall torch+cuda; 2: if the model cannot be loaded because the machine is too weak, change the model precision in request_llm/bridge_chatglm.py, replacing every AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
 python -m pip install -r request_llm/requirements_chatglm.txt
 
 # [Optional step II] Support Fudan MOSS
 python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # note: when running this line you must be at the project root
+git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llm/moss # note: when running this line you must be at the project root
+
+# [Optional step III] Support RWKV Runner
+See the wiki: https://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner
 
-# [Optional step
+# [Optional step IV] Make sure AVAIL_LLM_MODELS in the config.py configuration file contains the expected models; all currently supported models are listed below (the jittorllms series currently only supports the docker solution):
 AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
 ```
 
@@ -152,24 +158,28 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-
 python main.py
 ```
 
+### Installation method II: use Docker
 
 1. ChatGPT only (recommended for most people; equivalent to docker-compose solution 1)
+[![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
+[![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
+[![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
 
 ``` sh
-git clone https://github.com/binary-husky/gpt_academic.git  # download the project
+git clone --depth=1 https://github.com/binary-husky/gpt_academic.git  # download the project
 cd gpt_academic                                 # enter the path
 nano config.py                                  # edit config.py with any text editor: configure "Proxy", "API_KEY", "WEB_PORT" (e.g. 50923), etc.
 docker build -t gpt-academic .                  # install
 
+# (last step, on Linux) using `--net=host` is more convenient and quicker
 docker run --rm -it --net=host gpt-academic
+# (last step, on MacOS/Windows) you can only use the -p option to expose the container's port (e.g. 50923) to a port on the host
 docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
 ```
 P.S. If you need the Latex-dependent plugin features, see the Wiki. You can also get Latex support directly via docker-compose (edit docker-compose.yml: keep solution 4 and delete the others).
 
-2. ChatGPT +
+2. ChatGPT + ChatGLM2 + MOSS (requires familiarity with Docker)
+[![chatglm](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)
 
 ``` sh
 # Edit docker-compose.yml: keep solution 2 and delete the others, then adjust solution 2's configuration following the inline comments
@@ -177,13 +187,15 @@ docker-compose up
 ```
 
 3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker)
+[![jittorllms](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-jittorllms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-jittorllms.yml)
+
 ``` sh
 # Edit docker-compose.yml: keep solution 3 and delete the others, then adjust solution 3's configuration following the inline comments
 docker-compose up
 ```
 
-
+### Installation method III: other deployment options
 1. One-click run scripts.
 Windows users completely unfamiliar with the python environment can download the one-click run scripts published under [Release](https://github.com/binary-husky/gpt_academic/releases) to install the version without local models.
 The scripts were contributed by [oobabooga](https://github.com/oobabooga/one-click-installers).
@@ -200,17 +212,17 @@ docker-compose up
 5. Remote cloud server deployment (requires knowledge of and experience with cloud servers).
 See [deployment wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
 
-6. Using
+6. [One-click deployment](https://github.com/binary-husky/gpt_academic/issues/993) with Sealos.
+
+7. Using WSL2 (Windows Subsystem for Linux).
 See [deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
 
-
+8. How to run under a secondary URL path (such as `http://localhost/subpath`).
 See the [FastAPI run notes](docs/WithFastapi.md)
 
----
-# Advanced Usage
-## Custom shortcut buttons / custom function plugins
 
+# Advanced Usage
+### I: Custom shortcut buttons (academic hotkeys)
 Open `core_functional.py` with any text editor, add an entry as follows, then restart the program. (If the button has already been added successfully and is visible, both the prefix and the suffix can be hot-modified and take effect without restarting the program.)
 For example (an illustrative entry is also sketched at the end of this README section):
 ```
@@ -226,15 +238,15 @@
 <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
 </div>
 
-
+### II: Custom function plugins
 
 Write powerful function plugins to perform any task you can think of, and any you cannot.
 Writing and debugging plugins for this project is easy; as long as you have some basic python knowledge, you can implement your own plugin by imitating the templates we provide.
 For details, see the [function plugin guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
 
-
+
 # Latest Update
-
+### I: New feature updates
 
 1. Conversation saving. Call `保存当前的对话` in the function plugin area to save the current conversation as a readable + restorable html file;
 call `载入对话历史存档` in the function plugin area (drop-down menu) to restore a previous session.
@@ -293,10 +305,17 @@ Tip: clicking `载入对话历史存档` without specifying a file lets you view the historical html archive cache
 <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/476f66d9-7716-4537-b5c1-735372c25adb" height="200">
 </div>
 
+11. Language and theme switching
+<div align="center">
+<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/b6799499-b6fb-4f0c-9c8e-1b441872f4e8" width="500" >
+</div>
 
 
+### II: Versions:
 - version 3.5 (Todo): call all of this project's function plugins with natural language (high priority)
+- version 3.46: supports fully hands-free real-time voice conversation
+- version 3.45: supports custom ChatGLM2 fine-tuned models
+- version 3.44: official Azure support; improved UI usability
 - version 3.4: + arxiv paper translation and latex paper proofreading
 - version 3.3: + internet information aggregation
 - version 3.2: function plugins support more parameter interfaces (conversation saving; interpret code in any language + ask any combination of LLMs at the same time)
@@ -317,13 +336,18 @@ gpt_academic developer QQ group 2: 610599535
 - Some browser translation plugins interfere with the frontend of this software
 - Official Gradio currently has many compatibility bugs; be sure to install Gradio via `requirement.txt`
 
+### III: Themes
+The theme can be changed by modifying the `THEME` option (config.py)
+1. `Chuanhu-Small-and-Beautiful` [link](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)
+
+
+### IV: References and learning
 
 ```
 The code references the designs of many other excellent projects, in no particular order:
 
-# Tsinghua
-https://github.com/THUDM/
+# Tsinghua ChatGLM2-6B:
+https://github.com/THUDM/ChatGLM2-6B
 
 # Tsinghua JittorLLMs:
 https://github.com/Jittor/JittorLLMs
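The Advanced Usage section above tells readers to add an entry to `core_functional.py`; the example block itself is elided in this diff view. Below is a minimal sketch of what such an entry can look like, using the field names this changeset introduces (`Color`, `Visible`, `AutoClearHistory`); the button name and prompt text are invented for illustration only.

```python
# Hypothetical shortcut-button entry; the field names match this changeset,
# but the button name and prompt wording are made up.
custom_button = {
    "总结绘制脑图": {
        "Prefix": "总结下述段落,并用mermaid代码绘制脑图:\n\n",  # prepended to the user's input
        "Suffix": "",               # appended after the user's input
        "Color": "secondary",       # button color (default: secondary)
        "Visible": True,            # show the button in the UI
        "AutoClearHistory": False,  # keep the prior conversation history
    }
}
print(list(custom_button))  # ['总结绘制脑图']
```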
app.py
CHANGED
@@ -4,27 +4,30 @@ def main():
     import subprocess, sys
     subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'gradio-stable-fork'])
     import gradio as gr
-    if gr.__version__ not in ['3.28.3','3.32.3']: assert False, "
+    if gr.__version__ not in ['3.28.3','3.32.3']: assert False, "需要特殊依赖,请务必用 pip install -r requirements.txt 指令安装依赖,详情信息见requirements.txt"
     from request_llm.bridge_all import predict
-    from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, DummyWith
+    from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
     # It is recommended to copy a config_private.py to hold your secrets, such as API keys and proxy URLs, to avoid accidentally pushing them to github for others to see
-    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT,
-        get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', '
-
+    proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = \
+        get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
+    ENABLE_AUDIO, AUTO_CLEAR_TXT = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT')
     # If WEB_PORT is -1, pick a random web port
     PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
     if not AUTHENTICATION: AUTHENTICATION = None
 
     from check_proxy import get_current_version
+    from themes.theme import adjust_theme, advanced_css, theme_declaration
     initial_prompt = "Serve me as a writing and programming assistant."
-    title_html = f"<h1 align=\"center\">
+    title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
     description = """代码开源和更新[地址🚀](https://github.com/binary-husky/chatgpt_academic),感谢热情的[开发者们❤️](https://github.com/binary-husky/chatgpt_academic/graphs/contributors)"""
 
     # Query logging; python 3.9+ recommended (the newer the better)
-    import logging
+    import logging, uuid
     os.makedirs("gpt_log", exist_ok=True)
-    try:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO, encoding="utf-8")
-    except:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO)
+    try:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO, encoding="utf-8", format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
+    except:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO, format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
+    # Disable logging output from the 'httpx' logger
+    logging.getLogger("httpx").setLevel(logging.WARNING)
     print("所有问询记录将自动保存在本地目录./gpt_log/chat_secrets.log, 请注意自我隐私保护哦!")
 
     # Some common function modules
@@ -39,7 +42,6 @@ def main():
     gr.Chatbot.postprocess = format_io
 
     # Make some appearance and color adjustments
-    from theme import adjust_theme, advanced_css
     set_theme = adjust_theme()
 
     # Proxy and auto-update
@@ -47,24 +49,24 @@ def main():
     proxy_info = check_proxy(proxies)
 
     gr_L1 = lambda: gr.Row().style()
-    gr_L2 = lambda scale: gr.Column(scale=scale)
+    gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id)
     if LAYOUT == "TOP-DOWN":
         gr_L1 = lambda: DummyWith()
-        gr_L2 = lambda scale: gr.Row()
+        gr_L2 = lambda scale, elem_id: gr.Row()
         CHATBOT_HEIGHT /= 2
 
     cancel_handles = []
-    with gr.Blocks(title="
+    with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
         gr.HTML(title_html)
         gr.HTML('''<center><a href="https://huggingface.co/spaces/qingxu98/gpt-academic?duplicate=true"><img src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>请您打开此页面后务必点击上方的“复制空间”(Duplicate Space)按钮!<font color="#FF00FF">使用时,先在输入框填入API-KEY然后回车。</font><br/>切忌在“复制空间”(Duplicate Space)之前填入API_KEY或进行提问,否则您的API_KEY将极可能被空间所有者攫取!<br/>支持任意数量的OpenAI的密钥和API2D的密钥共存,例如输入"OpenAI密钥1,API2D密钥2",然后提交,即可同时使用两种模型接口。</center>''')
-        cookies = gr.State(
+        cookies = gr.State(load_chat_cookies())
         with gr_L1():
-            with gr_L2(scale=2):
-                chatbot = gr.Chatbot(label=f"当前模型:{LLM_MODEL}")
-                chatbot.style(height=CHATBOT_HEIGHT)
+            with gr_L2(scale=2, elem_id="gpt-chat"):
+                chatbot = gr.Chatbot(label=f"当前模型:{LLM_MODEL}", elem_id="gpt-chatbot")
+                if LAYOUT == "TOP-DOWN": chatbot.style(height=CHATBOT_HEIGHT)
                 history = gr.State([])
-            with gr_L2(scale=1):
-                with gr.Accordion("输入区", open=True) as area_input_primary:
+            with gr_L2(scale=1, elem_id="gpt-panel"):
+                with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
                     with gr.Row():
                         txt = gr.Textbox(show_label=False, lines=2, placeholder="输入问题或API密钥,输入多个密钥时,用英文逗号间隔。支持OpenAI密钥和API2D密钥共存。").style(container=False)
                     with gr.Row():
@@ -73,17 +75,20 @@ def main():
                         resetBtn = gr.Button("重置", variant="secondary"); resetBtn.style(size="sm")
                         stopBtn = gr.Button("停止", variant="secondary"); stopBtn.style(size="sm")
                         clearBtn = gr.Button("清除", variant="secondary", visible=False); clearBtn.style(size="sm")
+                    if ENABLE_AUDIO:
+                        with gr.Row():
+                            audio_mic = gr.Audio(source="microphone", type="numpy", streaming=True, show_label=False).style(container=False)
                     with gr.Row():
-                        status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}")
-                with gr.Accordion("基础功能区", open=True) as area_basic_fn:
+                        status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel")
+                with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn:
                     with gr.Row():
                         for k in functional:
                             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
                             variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
                             functional[k]["Button"] = gr.Button(k, variant=variant)
-                with gr.Accordion("函数插件区", open=True) as area_crazy_fn:
+                with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn:
                     with gr.Row():
-                        gr.Markdown("
+                        gr.Markdown("插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)")
                     with gr.Row():
                         for k in crazy_fns:
                             if not crazy_fns[k].get("AsButton", True): continue
@@ -94,25 +99,25 @@ def main():
                 with gr.Accordion("更多函数插件", open=True):
                     dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)]
                     with gr.Row():
-                        dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="").style(container=False)
+                        dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="", show_label=False).style(container=False)
                     with gr.Row():
                         plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
                                                          placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
                     with gr.Row():
                         switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary")
                 with gr.Row():
-                    with gr.Accordion("
+                    with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用。", open=False) as area_file_up:
                         file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple")
-                with gr.Accordion("更换模型 & SysPrompt & 交互界面布局", open=(LAYOUT == "TOP-DOWN")):
+                with gr.Accordion("更换模型 & SysPrompt & 交互界面布局", open=(LAYOUT == "TOP-DOWN"), elem_id="interact-panel"):
                     system_prompt = gr.Textbox(show_label=True, placeholder=f"System Prompt", label="System prompt", value=initial_prompt)
                     top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
                     temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
-                    max_length_sl = gr.Slider(minimum=256, maximum=
+                    max_length_sl = gr.Slider(minimum=256, maximum=8192, value=4096, step=1, interactive=True, label="Local LLM MaxLength",)
                     checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
                     md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
 
                 gr.Markdown(description)
-        with gr.Accordion("备选输入区", open=True, visible=False) as area_input_secondary:
+        with gr.Accordion("备选输入区", open=True, visible=False, elem_id="input-panel2") as area_input_secondary:
             with gr.Row():
                 txt2 = gr.Textbox(show_label=False, placeholder="Input question here.", label="输入区2").style(container=False)
             with gr.Row():
@@ -147,6 +152,11 @@ def main():
         resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
         clearBtn.click(lambda: ("",""), None, [txt, txt2])
         clearBtn2.click(lambda: ("",""), None, [txt, txt2])
+        if AUTO_CLEAR_TXT:
+            submitBtn.click(lambda: ("",""), None, [txt, txt2])
+            submitBtn2.click(lambda: ("",""), None, [txt, txt2])
+            txt.submit(lambda: ("",""), None, [txt, txt2])
+            txt2.submit(lambda: ("",""), None, [txt, txt2])
         # Register the callbacks of the basic function area
         for k in functional:
             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
@@ -174,16 +184,29 @@ def main():
             return {chatbot: gr.update(label="当前模型:"+k)}
         md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] )
         # Register the callback of the switchy (plugin-dispatch) button
-        def route(k, *args, **kwargs):
+        def route(request: gr.Request, k, *args, **kwargs):
             if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
-            yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs)
+            yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(request, *args, **kwargs)
         click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
         click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
         cancel_handles.append(click_handle)
         # Register the stop button callbacks
         stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
         stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
-
+        if ENABLE_AUDIO:
+            from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
+            rad = RealtimeAudioDistribution()
+            def deal_audio(audio, cookies):
+                rad.feed(cookies['uuid'].hex, audio)
+            audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
+
+        def init_cookie(cookies, chatbot):
+            # Assign every visiting user a unique uuid
+            cookies.update({'uuid': uuid.uuid4()})
+            return cookies
+        demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
+        demo.load(lambda: 0, inputs=None, outputs=None, _js='()=>{ChatBotHeight();}')
+
     # gradio's inbrowser trigger is unreliable; roll back to the original browser-open function
     def auto_opentab_delay():
         import threading, webbrowser, time
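Note on the audio path added above: each page load mints a per-user uuid cookie, and every streamed microphone chunk is filed under that user's key via `rad.feed(cookies['uuid'].hex, audio)`. A self-contained sketch of that keying pattern follows; the class below is an illustrative stand-in, not the project's actual `RealtimeAudioDistribution`.

```python
import uuid

class AudioBufferByUser:
    """Stand-in for a per-user audio buffer, keyed the same way the diff
    keys RealtimeAudioDistribution: by cookies['uuid'].hex."""
    def __init__(self):
        self.buffers = {}  # uuid hex -> list of audio chunks

    def feed(self, user_key, chunk):
        # file the latest microphone chunk under this user's key
        self.buffers.setdefault(user_key, []).append(chunk)

    def read(self, user_key):
        # drain everything buffered so far for this user
        return self.buffers.pop(user_key, [])

# Mirrors the diff: demo.load assigns the uuid, audio_mic.stream feeds chunks.
rad = AudioBufferByUser()
cookies = {'uuid': uuid.uuid4()}
rad.feed(cookies['uuid'].hex, b'\x00\x01')  # a fake audio chunk
print(rad.read(cookies['uuid'].hex))        # [b'\x00\x01']
```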
check_proxy.py
CHANGED
@@ -3,15 +3,20 @@ def check_proxy(proxies):
     import requests
     proxies_https = proxies['https'] if proxies is not None else '无'
     try:
-        response = requests.get("https://ipapi.co/json/",
-                                proxies=proxies, timeout=4)
+        response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
         data = response.json()
         print(f'查询代理的地理位置,返回的结果是{data}')
         if 'country_name' in data:
             country = data['country_name']
             result = f"代理配置 {proxies_https}, 代理所在地:{country}"
         elif 'error' in data:
-
+            alternative = _check_with_backup_source(proxies)
+            if alternative is None:
+                result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
+            else:
+                result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
+        else:
+            result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"
         print(result)
         return result
     except:
@@ -19,6 +24,11 @@ def check_proxy(proxies):
         print(result)
         return result
 
+def _check_with_backup_source(proxies):
+    import random, string, requests
+    random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
+    try: return requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()['dns']['geo']
+    except: return None
 
 def backup_and_download(current_version, remote_version):
     """
@@ -115,7 +125,7 @@ def auto_update(raise_error=False):
             with open('./version', 'r', encoding='utf8') as f:
                 current_version = f.read()
             current_version = json.loads(current_version)['version']
-            if (remote_version - current_version) >= 0.01:
+            if (remote_version - current_version) >= 0.01-1e-5:
                 from colorful import print亮黄
                 print亮黄(
                     f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
@@ -137,7 +147,7 @@ def auto_update(raise_error=False):
         else:
             return
     except:
-        msg = '
+        msg = '自动更新程序:已禁用。建议排查:代理网络配置。'
         if raise_error:
             from toolbox import trimmed_format_exc
             msg += trimmed_format_exc()
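The `0.01-1e-5` epsilon added to `auto_update` matters because version numbers are parsed as floats and float subtraction is inexact: the difference between two adjacent versions can land just under 0.01. A quick demonstration in plain Python, no project code needed:

```python
# IEEE-754 doubles cannot represent 3.48 or 3.47 exactly, so their
# difference comes out just below 0.01.
remote_version, current_version = 3.48, 3.47
diff = remote_version - current_version
print(diff)                  # 0.009999999999999787
print(diff >= 0.01)          # False -> the old check would skip this update
print(diff >= 0.01 - 1e-5)   # True  -> the patched check detects it
```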
config.py
CHANGED
@@ -1,17 +1,27 @@
-
-
+"""
+    以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。
+    读取优先级:环境变量 > config_private.py > config.py
+    --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
+    All the following configurations also support using environment variables to override,
+    and the environment variable configuration format can be seen in docker-compose.yml.
+    Configuration reading priority: environment variable > config_private.py > config.py
+"""
+
+# [step 1]>> API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789". In very rare cases you also need to fill in the organization (format like org-123456789abcdefghijklmno); scroll down to the API_ORG setting
+API_KEY = "此处填API密钥"    # multiple API-KEYs may be filled in at once, separated by English commas, e.g. API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
 
 
 # [step 2]>> Set to True to use a proxy; if deploying directly on an overseas server, leave this unchanged
 USE_PROXY = False
 if USE_PROXY:
-
-
-
-
-
-
-
+    """
+    The format is [protocol]:// [address] :[port]; before filling this in, don't forget to set USE_PROXY to True. If deploying directly on an overseas server, leave it unchanged
+    <Configuration tutorial & video tutorial> https://github.com/binary-husky/gpt_academic/issues/1
+    [protocol] Common protocols are simply socks5h/http; e.g. the default local protocol of v2**y and ss* is socks5h, while that of cl**h is http
+    [address] Fill in localhost or 127.0.0.1 if unsure (localhost means the proxy software is installed on this machine)
+    [port] Look in your proxy software's settings; UIs differ, but the port number should be in the most prominent spot
+    """
+    # Proxy network address: open your circumvention software to check the protocol (socks5h / http), address (localhost) and port (11284) of the proxy
     proxies = {
         #          [协议]:// [地址] :[端口]
         "http":  "socks5h://localhost:11284",  # 再例如  "http":  "http://127.0.0.1:7890",
@@ -20,28 +30,40 @@ if USE_PROXY:
 else:
     proxies = None
 
-#
-
+# ------------------------------------ The configurations below can optimize the experience, but in most cases do not need to be changed ------------------------------------
+
+# URL redirection, to swap out the API_URL (high-risk setting! do not modify under normal circumstances! by changing it you expose your API-KEY and conversation privacy entirely to the middleman you designate!)
+# Format: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "fill in the redirected api.openai.com URL here"}
+# Example: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions"}
+API_URL_REDIRECT = {}
+
+# In multithreaded function plugins, how many threads are allowed to access OpenAI at once by default. Free trial users are limited to 3 requests per minute; pay-as-you-go users to 3500 per minute
+# In short: free (5 dollar) users fill in 3; users who bound a credit card to OpenAI may fill in 16 or more. To raise the limit, see https://platform.openai.com/docs/guides/rate-limits/overview
 DEFAULT_WORKER_NUM = 3
 
 
-# [step 4]>> The configurations below can optimize the experience, but in most cases do not need to be changed
 # Height of the chat window
 CHATBOT_HEIGHT = 1115
 
+
 # Code highlighting
 CODE_HIGHLIGHT = True
 
+
 # Window layout
-LAYOUT = "LEFT-RIGHT"
-DARK_MODE = True
+LAYOUT = "LEFT-RIGHT"   # "LEFT-RIGHT" (left-right layout) # "TOP-DOWN" (top-down layout)
+DARK_MODE = True        # dark mode / light mode
+
 
 # After sending a request to OpenAI, how long to wait before judging it timed out
 TIMEOUT_SECONDS = 30
 
+
 # Web port; -1 means a random port
 WEB_PORT = -1
 
+
 # Retry limit when OpenAI does not respond (network lag, proxy failure, expired KEY)
 MAX_RETRY = 2
 
@@ -49,34 +71,43 @@ MAX_RETRY = 2
 LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm"
 AVAIL_LLM_MODELS = ["newbing-free", "gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo"]
 
+# ChatGLM(2) Finetune Model Path (if a ChatGLM2 fine-tuned model is used, add "chatglmft" to AVAIL_LLM_MODELS)
+ChatGLM_PTUNING_CHECKPOINT = "" # e.g. "/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
+
+
 # Execution mode of local LLM models such as ChatGLM: CPU/GPU
 LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
+LOCAL_MODEL_QUANT = "FP16" # default "FP16"; "INT4" enables the quantized INT4 build, "INT8" the quantized INT8 build
+
 
 # Number of parallel gradio threads (no need to change)
 CONCURRENT_COUNT = 100
 
+
+# Whether to automatically clear the input box on submit
+AUTO_CLEAR_TXT = False
+
+
+# Color theme, choose from ["Default", "Chuanhu-Small-and-Beautiful"]
+THEME = "Default"
+
+
 # Add a live2d decoration
 ADD_WAIFU = False
 
+
 # Set username and password (no need to change) (the feature is unstable, tied to the gradio version and the network; not recommended for local use)
 # [("username", "password"), ("username2", "password2"), ...]
 AUTHENTICATION = []
 
-# URL redirection, to swap out the API_URL (do not modify under normal circumstances!!)
-# (High-risk setting! By changing it you expose your API-KEY and conversation privacy entirely to the middleman you designate!)
-# Format {"https://api.openai.com/v1/chat/completions": "fill in the redirected api.openai.com URL here"}
-# Example API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://ai.open.com/api/conversation"}
-API_URL_REDIRECT = {}
 
 # If you need to run under a secondary path (do not modify under normal circumstances!!) (requires matching changes in main.py to take effect!)
 CUSTOM_PATH = "/"
 
-
-
-
-
-your bing cookies here
-"""
+
+# In very rare cases, the official openai KEY must be accompanied by an organization code (format like org-xxxxxxxxxxxxxxxxxxxxxxxx)
+API_ORG = ""
+
 
 # If you need Slack Claude, see request_llm/README.md for the tutorial
 SLACK_CLAUDE_BOT_ID = ''
@@ -84,7 +115,35 @@ SLACK_CLAUDE_USER_TOKEN = ''
 
 
 # If you need AZURE, see the extra document docs\use_azure.md for details
-AZURE_ENDPOINT = "https
-AZURE_API_KEY = "填入azure openai api的密钥"
-
-
+AZURE_ENDPOINT = "https://你亲手写的api名称.openai.azure.com/"
+AZURE_API_KEY = "填入azure openai api的密钥"    # it is recommended to fill this in at API_KEY instead; this option is about to be deprecated
+AZURE_ENGINE = "填入你亲手写的部署名"            # read docs\use_azure.md
+
+
+# Use Newbing
+NEWBING_STYLE = "creative"  # ["creative", "balanced", "precise"]
+NEWBING_COOKIES = """
+put your new bing cookies here
+"""
+
+
+# Aliyun real-time speech recognition; configuration is fairly involved, recommended for advanced users only; see https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
+ENABLE_AUDIO = False
+ALIYUN_TOKEN=""     # e.g. f37f30e0f9934c34a992f6f64f7eba4f
+ALIYUN_APPKEY=""    # e.g. RoPlZrM88DnAFkZK
+ALIYUN_ACCESSKEY="" # (no need to fill in)
+ALIYUN_SECRET=""    # (no need to fill in)
+
+
+# Connect the iFlytek Spark large model https://console.xfyun.cn/services/iat
+XFYUN_APPID = "00000000"
+XFYUN_API_SECRET = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
+XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
+
+
+# Claude API KEY
+ANTHROPIC_API_KEY = ""
+
+
+# Custom API KEY pattern
+CUSTOM_API_KEY_PATTERN = ""
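The new docstring at the top of config.py states the read priority: environment variable > config_private.py > config.py. A minimal sketch of that resolution order follows; this is illustrative only, since the project's actual reader lives in `toolbox.get_conf`, which is not part of this diff, and it would also need to type-convert env-var strings.

```python
import importlib, os

def read_single_conf(name, default=None):
    """Illustrative resolution order: env var, then config_private.py,
    then config.py. Real env vars arrive as strings and would still
    need type conversion."""
    if name in os.environ:                        # 1) environment variable wins
        return os.environ[name]
    try:                                          # 2) then the private override file
        cfg_p = importlib.import_module('config_private')
        if hasattr(cfg_p, name):
            return getattr(cfg_p, name)
    except ImportError:
        pass
    cfg = importlib.import_module('config')      # 3) finally the checked-in defaults
    return getattr(cfg, name, default)

print(read_single_conf('WEB_PORT', default=-1))   # e.g. WEB_PORT=50923 in the env overrides both files
```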
core_functional.py
CHANGED
@@ -1,20 +1,25 @@
 # the 'primary' color corresponds to primary_hue in theme.py
 # the 'secondary' color corresponds to neutral_hue in theme.py
 # the 'stop' color corresponds to color_er in theme.py
+import importlib
 from toolbox import clear_line_break
 
 
 def get_core_functions():
     return {
         "英语学术润色": {
-            #
+            # Prefix, prepended before your input; e.g. used to state your request, such as translating, explaining code, polishing, etc.
             "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
                       r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
                       r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
-            #
+            # Suffix, appended after your input; e.g. combined with the prefix it can wrap your input in quotation marks
             "Suffix": r"",
-
+            # Button color (default: secondary)
+            "Color": r"secondary",
+            # Whether the button is visible (default: True, i.e. visible)
+            "Visible": True,
+            # Whether to clear the history when triggered (default: False, i.e. do not process the previous conversation history)
+            "AutoClearHistory": False
         },
         "中文学术润色": {
             "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
@@ -63,6 +68,7 @@ def get_core_functions():
             "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
                       r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
             "Suffix": r"",
+            "Visible": False,
         },
         "解释代码": {
             "Prefix": r"请解释以下代码:" + "\n```\n",
@@ -73,6 +79,16 @@ def get_core_functions():
                       r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
                       r"Items need to be transformed:",
             "Suffix": r"",
-            "Visible": False,
         }
     }
+
+
+def handle_core_functionality(additional_fn, inputs, history, chatbot):
+    import core_functional
+    importlib.reload(core_functional)    # hot-reload the prompt
+    core_functional = core_functional.get_core_functions()
+    if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # fetch the preprocessing function (if any)
+    inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+    if core_functional[additional_fn].get("AutoClearHistory", False):
+        history = []
+    return inputs, history
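The new `handle_core_functionality` reloads `core_functional` on every call, which is exactly what makes the prefixes and suffixes hot-editable without a restart, as the README change notes. A usage sketch, with an invented input sentence:

```python
# The button name must be a key returned by get_core_functions();
# the input text below is invented.
from core_functional import handle_core_functionality

inputs = "The experiment shows a significent improvment."
inputs, history = handle_core_functionality("英语学术润色", inputs, history=[], chatbot=None)
print(inputs[:60])  # the user text is now wrapped in the polishing Prefix/Suffix
```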
crazy_functions/Langchain知识库.py
CHANGED
@@ -30,7 +30,7 @@ def 知识库问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     )
     yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
     from .crazy_utils import try_install_deps
-    try_install_deps(['zh_langchain==0.2.1'])
+    try_install_deps(['zh_langchain==0.2.1', 'pypinyin'])
 
     # < -------------------- read parameters --------------- >
     if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
crazy_functions/Latex输出PDF结果.py
CHANGED
@@ -157,7 +157,7 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
     try:
         import glob, os, time, subprocess
         subprocess.Popen(['pdflatex', '-version'])
-        from .
+        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
     except Exception as e:
         chatbot.append([ f"解析项目: {txt}",
                          f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
@@ -234,7 +234,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
     try:
         import glob, os, time, subprocess
         subprocess.Popen(['pdflatex', '-version'])
-        from .
+        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
     except Exception as e:
         chatbot.append([ f"解析项目: {txt}",
                          f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
crazy_functions/chatglm微调工具.py
ADDED
@@ -0,0 +1,141 @@
from toolbox import CatchException, update_ui, promote_file_to_downloadzone
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
import datetime, json

def fetch_items(list_of_items, batch_size):
    for i in range(0, len(list_of_items), batch_size):
        yield list_of_items[i:i + batch_size]

def string_to_options(arguments):
    import argparse
    import shlex

    # Create an argparse.ArgumentParser instance
    parser = argparse.ArgumentParser()

    # Add command-line arguments
    parser.add_argument("--llm_to_learn", type=str, help="LLM model to learn", default="gpt-3.5-turbo")
    parser.add_argument("--prompt_prefix", type=str, help="Prompt prefix", default='')
    parser.add_argument("--system_prompt", type=str, help="System prompt", default='')
    parser.add_argument("--batch", type=int, help="System prompt", default=50)
    parser.add_argument("--pre_seq_len", type=int, help="pre_seq_len", default=50)
    parser.add_argument("--learning_rate", type=float, help="learning_rate", default=2e-2)
    parser.add_argument("--num_gpus", type=int, help="num_gpus", default=1)
    parser.add_argument("--json_dataset", type=str, help="json_dataset", default="")
    parser.add_argument("--ptuning_directory", type=str, help="ptuning_directory", default="")

    # Parse the arguments
    args = parser.parse_args(shlex.split(arguments))

    return args

@CatchException
def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
    plugin_kwargs   插件模型的参数
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
    web_port        当前软件运行的端口号
    """
    history = []    # 清空历史,以免输入溢出
    chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    args = plugin_kwargs.get("advanced_arg", None)
    if args is None:
        chatbot.append(("没给定指令", "退出"))
        yield from update_ui(chatbot=chatbot, history=history); return
    else:
        arguments = string_to_options(arguments=args)

    dat = []
    with open(txt, 'r', encoding='utf8') as f:
        for line in f.readlines():
            json_dat = json.loads(line)
            dat.append(json_dat["content"])

    llm_kwargs['llm_model'] = arguments.llm_to_learn
    for batch in fetch_items(dat, arguments.batch):
        res = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            inputs_array=[f"{arguments.prompt_prefix}\n\n{b}" for b in (batch)],
            inputs_show_user_array=[f"Show Nothing" for _ in (batch)],
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history_array=[[] for _ in (batch)],
            sys_prompt_array=[arguments.system_prompt for _ in (batch)],
            max_workers=10  # OpenAI所允许的最大并行过载
        )

        with open(txt+'.generated.json', 'a+', encoding='utf8') as f:
            for b, r in zip(batch, res[1::2]):
                f.write(json.dumps({"content":b, "summary":r}, ensure_ascii=False)+'\n')

    promote_file_to_downloadzone(txt+'.generated.json', rename_file='generated.json', chatbot=chatbot)
    return



@CatchException
def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
    plugin_kwargs   插件模型的参数
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
    web_port        当前软件运行的端口号
    """
    import subprocess
    history = []    # 清空历史,以免输入溢出
    chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    args = plugin_kwargs.get("advanced_arg", None)
    if args is None:
        chatbot.append(("没给定指令", "退出"))
        yield from update_ui(chatbot=chatbot, history=history); return
    else:
        arguments = string_to_options(arguments=args)

    pre_seq_len = arguments.pre_seq_len             # 128
    learning_rate = arguments.learning_rate         # 2e-2
    num_gpus = arguments.num_gpus                   # 1
    json_dataset = arguments.json_dataset           # 't_code.json'
    ptuning_directory = arguments.ptuning_directory # '/home/hmp/ChatGLM2-6B/ptuning'

    command = f"torchrun --standalone --nnodes=1 --nproc-per-node={num_gpus} main.py \
        --do_train \
        --train_file AdvertiseGen/{json_dataset} \
        --validation_file AdvertiseGen/{json_dataset} \
        --preprocessing_num_workers 20 \
        --prompt_column content \
        --response_column summary \
        --overwrite_cache \
        --model_name_or_path THUDM/chatglm2-6b \
        --output_dir output/clothgen-chatglm2-6b-pt-{pre_seq_len}-{learning_rate} \
        --overwrite_output_dir \
        --max_source_length 256 \
        --max_target_length 256 \
        --per_device_train_batch_size 1 \
        --per_device_eval_batch_size 1 \
        --gradient_accumulation_steps 16 \
        --predict_with_generate \
        --max_steps 100 \
        --logging_steps 10 \
        --save_steps 20 \
        --learning_rate {learning_rate} \
        --pre_seq_len {pre_seq_len} \
        --quantization_bit 4"

    process = subprocess.Popen(command, shell=True, cwd=ptuning_directory)
    try:
        process.communicate(timeout=3600*24)
    except subprocess.TimeoutExpired:
        process.kill()
    return
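The new plugin turns its advanced_arg string into options with shlex + argparse via string_to_options. A quick standalone illustration of that parsing (argument values assumed):

# Illustrative parse of an advanced_arg string, mirroring string_to_options.
import argparse, shlex

parser = argparse.ArgumentParser()
parser.add_argument("--llm_to_learn", type=str, default="gpt-3.5-turbo")
parser.add_argument("--batch", type=int, default=50)

# shlex.split honors shell-style quoting inside the single argument string
args = parser.parse_args(shlex.split("--llm_to_learn gpt-3.5-turbo --batch 25"))
print(args.llm_to_learn, args.batch)   # gpt-3.5-turbo 25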
crazy_functions/crazy_utils.py
CHANGED
@@ -130,6 +130,11 @@ def request_gpt_model_in_new_thread_with_ui_alive(
     yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息
     return final_result
 
+def can_multi_process(llm):
+    if llm.startswith('gpt-'): return True
+    if llm.startswith('api2d-'): return True
+    if llm.startswith('azure-'): return True
+    return False
 
 def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
         inputs_array, inputs_show_user_array, llm_kwargs,
@@ -175,7 +180,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     except: max_workers = 8
     if max_workers <= 0: max_workers = 3
     # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿
-    if not (llm_kwargs['llm_model']
+    if not can_multi_process(llm_kwargs['llm_model']):
         max_workers = 1
 
     executor = ThreadPoolExecutor(max_workers=max_workers)
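can_multi_process is the whole gate here: only remote API models (gpt-*, api2d-*, azure-*) may fan out across threads, while local models are forced down to a single worker. A standalone sketch of the same decision (model name illustrative):

# Sketch of the worker-count gate introduced above.
from concurrent.futures import ThreadPoolExecutor

def can_multi_process(llm):
    return any(llm.startswith(p) for p in ('gpt-', 'api2d-', 'azure-'))

llm_model = 'chatglm'            # illustrative value
max_workers = 8 if can_multi_process(llm_model) else 1
executor = ThreadPoolExecutor(max_workers=max_workers)
print(max_workers)               # 1 -> local models stay single-threaded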
crazy_functions/latex_fns/latex_actions.py
ADDED
@@ -0,0 +1,447 @@
from toolbox import update_ui, update_ui_lastest_msg    # 刷新Gradio前端界面
from toolbox import zip_folder, objdump, objload, promote_file_to_downloadzone
from .latex_toolbox import PRESERVE, TRANSFORM
from .latex_toolbox import set_forbidden_text, set_forbidden_text_begin_end, set_forbidden_text_careful_brace
from .latex_toolbox import reverse_forbidden_text_careful_brace, reverse_forbidden_text, convert_to_linklist, post_process
from .latex_toolbox import fix_content, find_main_tex_file, merge_tex_files, compile_latex_with_timeout

import os, shutil
import re
import numpy as np

pj = os.path.join


def split_subprocess(txt, project_folder, return_dict, opts):
    """
    break down latex file to a linked list,
    each node use a preserve flag to indicate whether it should
    be proccessed by GPT.
    """
    text = txt
    mask = np.zeros(len(txt), dtype=np.uint8) + TRANSFORM

    # 吸收title与作者以上的部分
    text, mask = set_forbidden_text(text, mask, r"^(.*?)\\maketitle", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"^(.*?)\\begin{document}", re.DOTALL)
    # 吸收iffalse注释
    text, mask = set_forbidden_text(text, mask, r"\\iffalse(.*?)\\fi", re.DOTALL)
    # 吸收在42行以内的begin-end组合
    text, mask = set_forbidden_text_begin_end(text, mask, r"\\begin\{([a-z\*]*)\}(.*?)\\end\{\1\}", re.DOTALL, limit_n_lines=42)
    # 吸收匿名公式
    text, mask = set_forbidden_text(text, mask, [ r"\$\$([^$]+)\$\$", r"\\\[.*?\\\]" ], re.DOTALL)
    # 吸收其他杂项
    text, mask = set_forbidden_text(text, mask, [ r"\\section\{(.*?)\}", r"\\section\*\{(.*?)\}", r"\\subsection\{(.*?)\}", r"\\subsubsection\{(.*?)\}" ])
    text, mask = set_forbidden_text(text, mask, [ r"\\bibliography\{(.*?)\}", r"\\bibliographystyle\{(.*?)\}" ])
    text, mask = set_forbidden_text(text, mask, r"\\begin\{thebibliography\}.*?\\end\{thebibliography\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{lstlisting\}(.*?)\\end\{lstlisting\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{wraptable\}(.*?)\\end\{wraptable\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{wrapfigure\}(.*?)\\end\{wrapfigure\}", r"\\begin\{wrapfigure\*\}(.*?)\\end\{wrapfigure\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{figure\}(.*?)\\end\{figure\}", r"\\begin\{figure\*\}(.*?)\\end\{figure\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{multline\}(.*?)\\end\{multline\}", r"\\begin\{multline\*\}(.*?)\\end\{multline\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{table\}(.*?)\\end\{table\}", r"\\begin\{table\*\}(.*?)\\end\{table\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{minipage\}(.*?)\\end\{minipage\}", r"\\begin\{minipage\*\}(.*?)\\end\{minipage\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{align\*\}(.*?)\\end\{align\*\}", r"\\begin\{align\}(.*?)\\end\{align\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{equation\}(.*?)\\end\{equation\}", r"\\begin\{equation\*\}(.*?)\\end\{equation\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\includepdf\[(.*?)\]\{(.*?)\}", r"\\clearpage", r"\\newpage", r"\\appendix", r"\\tableofcontents", r"\\include\{(.*?)\}"])
    text, mask = set_forbidden_text(text, mask, [r"\\vspace\{(.*?)\}", r"\\hspace\{(.*?)\}", r"\\label\{(.*?)\}", r"\\begin\{(.*?)\}", r"\\end\{(.*?)\}", r"\\item "])
    text, mask = set_forbidden_text_careful_brace(text, mask, r"\\hl\{(.*?)\}", re.DOTALL)
    # reverse 操作必须放在最后
    text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\caption\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
    text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\abstract\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
    text, mask = reverse_forbidden_text(text, mask, r"\\begin\{abstract\}(.*?)\\end\{abstract\}", re.DOTALL, forbid_wrapper=True)
    root = convert_to_linklist(text, mask)

    # 最后一步处理,增强稳健性
    root = post_process(root)

    # 输出html调试文件,用红色标注处保留区(PRESERVE),用黑色标注转换区(TRANSFORM)
    with open(pj(project_folder, 'debug_log.html'), 'w', encoding='utf8') as f:
        segment_parts_for_gpt = []
        nodes = []
        node = root
        while True:
            nodes.append(node)
            show_html = node.string.replace('\n','<br/>')
            if not node.preserve:
                segment_parts_for_gpt.append(node.string)
                f.write(f'<p style="color:black;">#{node.range}{show_html}#</p>')
            else:
                f.write(f'<p style="color:red;">{show_html}</p>')
            node = node.next
            if node is None: break

    for n in nodes: n.next = None   # break
    return_dict['nodes'] = nodes
    return_dict['segment_parts_for_gpt'] = segment_parts_for_gpt
    return return_dict

class LatexPaperSplit():
    """
    break down latex file to a linked list,
    each node use a preserve flag to indicate whether it should
    be proccessed by GPT.
    """
    def __init__(self) -> None:
        self.nodes = None
        self.msg = "*{\\scriptsize\\textbf{警告:该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成," + \
            "版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
            "项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
        # 请您不要删除或修改这行警告,除非您是论文的原作者(如果您是论文原作者,欢迎加REAME中的QQ联系开发者)
        self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"

    def merge_result(self, arr, mode, msg, buggy_lines=[], buggy_line_surgery_n_lines=10):
        """
        Merge the result after the GPT process completed
        """
        result_string = ""
        node_cnt = 0
        line_cnt = 0

        for node in self.nodes:
            if node.preserve:
                line_cnt += node.string.count('\n')
                result_string += node.string
            else:
                translated_txt = fix_content(arr[node_cnt], node.string)
                begin_line = line_cnt
                end_line = line_cnt + translated_txt.count('\n')

                # reverse translation if any error
                if any([begin_line-buggy_line_surgery_n_lines <= b_line <= end_line+buggy_line_surgery_n_lines for b_line in buggy_lines]):
                    translated_txt = node.string

                result_string += translated_txt
                node_cnt += 1
                line_cnt += translated_txt.count('\n')

        if mode == 'translate_zh':
            pattern = re.compile(r'\\begin\{abstract\}.*\n')
            match = pattern.search(result_string)
            if not match:
                # match \abstract{xxxx}
                pattern_compile = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
                match = pattern_compile.search(result_string)
                position = match.regs[1][0]
            else:
                # match \begin{abstract}xxxx\end{abstract}
                position = match.end()
            result_string = result_string[:position] + self.msg + msg + self.msg_declare + result_string[position:]
        return result_string


    def split(self, txt, project_folder, opts):
        """
        break down latex file to a linked list,
        each node use a preserve flag to indicate whether it should
        be proccessed by GPT.
        P.S. use multiprocessing to avoid timeout error
        """
        import multiprocessing
        manager = multiprocessing.Manager()
        return_dict = manager.dict()
        p = multiprocessing.Process(
            target=split_subprocess,
            args=(txt, project_folder, return_dict, opts))
        p.start()
        p.join()
        p.close()
        self.nodes = return_dict['nodes']
        self.sp = return_dict['segment_parts_for_gpt']
        return self.sp


class LatexPaperFileGroup():
    """
    use tokenizer to break down text according to max_token_limit
    """
    def __init__(self):
        self.file_paths = []
        self.file_contents = []
        self.sp_file_contents = []
        self.sp_file_index = []
        self.sp_file_tag = []

        # count_token
        from request_llm.bridge_all import model_info
        enc = model_info["gpt-3.5-turbo"]['tokenizer']
        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
        self.get_token_num = get_token_num

    def run_file_split(self, max_token_limit=1900):
        """
        use tokenizer to break down text according to max_token_limit
        """
        for index, file_content in enumerate(self.file_contents):
            if self.get_token_num(file_content) < max_token_limit:
                self.sp_file_contents.append(file_content)
                self.sp_file_index.append(index)
                self.sp_file_tag.append(self.file_paths[index])
            else:
                from ..crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
                for j, segment in enumerate(segments):
                    self.sp_file_contents.append(segment)
                    self.sp_file_index.append(index)
                    self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
        print('Segmentation: done')

    def merge_result(self):
        self.file_result = ["" for _ in range(len(self.file_paths))]
        for r, k in zip(self.sp_file_result, self.sp_file_index):
            self.file_result[k] += r

    def write_result(self):
        manifest = []
        for path, res in zip(self.file_paths, self.file_result):
            with open(path + '.polish.tex', 'w', encoding='utf8') as f:
                manifest.append(path + '.polish.tex')
                f.write(res)
        return manifest


def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, mode='proofread', switch_prompt=None, opts=[]):
    import time, os, re
    from ..crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    from .latex_actions import LatexPaperFileGroup, LatexPaperSplit

    # <-------- 寻找主tex文件 ---------->
    maintex = find_main_tex_file(file_manifest, mode)
    chatbot.append((f"定位主Latex文件", f'[Local Message] 分析结果:该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。'))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    time.sleep(3)

    # <-------- 读取Latex文件, 将多文件tex工程融合为一个巨型tex ---------->
    main_tex_basename = os.path.basename(maintex)
    assert main_tex_basename.endswith('.tex')
    main_tex_basename_bare = main_tex_basename[:-4]
    may_exist_bbl = pj(project_folder, f'{main_tex_basename_bare}.bbl')
    if os.path.exists(may_exist_bbl):
        shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge.bbl'))
        shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_{mode}.bbl'))
        shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_diff.bbl'))

    with open(maintex, 'r', encoding='utf-8', errors='replace') as f:
        content = f.read()
        merged_content = merge_tex_files(project_folder, content, mode)

    with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f:
        f.write(merged_content)

    # <-------- 精细切分latex文件 ---------->
    chatbot.append((f"Latex文件融合完成", f'[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。'))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
    lps = LatexPaperSplit()
    res = lps.split(merged_content, project_folder, opts) # 消耗时间的函数

    # <-------- 拆分过长的latex片段 ---------->
    pfg = LatexPaperFileGroup()
    for index, r in enumerate(res):
        pfg.file_paths.append('segment-' + str(index))
        pfg.file_contents.append(r)

    pfg.run_file_split(max_token_limit=1024)
    n_split = len(pfg.sp_file_contents)

    # <-------- 根据需要切换prompt ---------->
    inputs_array, sys_prompt_array = switch_prompt(pfg, mode)
    inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag]

    if os.path.exists(pj(project_folder,'temp.pkl')):

        # <-------- 【仅调试】如果存在调试缓存文件,则跳过GPT请求环节 ---------->
        pfg = objload(file=pj(project_folder,'temp.pkl'))

    else:
        # <-------- gpt 多线程请求 ---------->
        gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            inputs_array=inputs_array,
            inputs_show_user_array=inputs_show_user_array,
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history_array=[[""] for _ in range(n_split)],
            sys_prompt_array=sys_prompt_array,
            # max_workers=5,  # 并行任务数量限制, 最多同时执行5个, 其他的排队等待
            scroller_max_len = 40
        )

        # <-------- 文本碎片重组为完整的tex片段 ---------->
        pfg.sp_file_result = []
        for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents):
            pfg.sp_file_result.append(gpt_say)
        pfg.merge_result()

        # <-------- 临时存储用于调试 ---------->
        pfg.get_token_num = None
        objdump(pfg, file=pj(project_folder,'temp.pkl'))

    write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)

    # <-------- 写出文件 ---------->
    msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}。"
    final_tex = lps.merge_result(pfg.file_result, mode, msg)
    objdump((lps, pfg.file_result, mode, msg), file=pj(project_folder,'merge_result.pkl'))

    with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f:
        if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex)


    # <-------- 整理结果, 退出 ---------->
    chatbot.append((f"完成了吗?", 'GPT结果已输出, 即将编译PDF'))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # <-------- 返回 ---------->
    return project_folder + f'/merge_{mode}.tex'


def remove_buggy_lines(file_path, log_path, tex_name, tex_name_pure, n_fix, work_folder_modified, fixed_line=[]):
    try:
        with open(log_path, 'r', encoding='utf-8', errors='replace') as f:
            log = f.read()
        import re
        buggy_lines = re.findall(tex_name+':([0-9]{1,5}):', log)
        buggy_lines = [int(l) for l in buggy_lines]
        buggy_lines = sorted(buggy_lines)
        buggy_line = buggy_lines[0]-1
        print("reversing tex line that has errors", buggy_line)

        # 重组,逆转出错的段落
        if buggy_line not in fixed_line:
            fixed_line.append(buggy_line)

        lps, file_result, mode, msg = objload(file=pj(work_folder_modified,'merge_result.pkl'))
        final_tex = lps.merge_result(file_result, mode, msg, buggy_lines=fixed_line, buggy_line_surgery_n_lines=5*n_fix)

        with open(pj(work_folder_modified, f"{tex_name_pure}_fix_{n_fix}.tex"), 'w', encoding='utf-8', errors='replace') as f:
            f.write(final_tex)

        return True, f"{tex_name_pure}_fix_{n_fix}", buggy_lines
    except:
        print("Fatal error occurred, but we cannot identify error, please download zip, read latex log, and compile manually.")
        return False, -1, [-1]


def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_folder_original, work_folder_modified, work_folder, mode='default'):
    import os, time
    n_fix = 1
    fixed_line = []
    max_try = 32
    chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder},如果程序停顿5分钟以上,请直接去该路径下取回翻译结果,或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
    chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1]) # 刷新界面
    yield from update_ui_lastest_msg('编译已经开始...', chatbot, history)   # 刷新Gradio前端界面

    while True:
        import os
        may_exist_bbl = pj(work_folder_modified, f'merge.bbl')
        target_bbl = pj(work_folder_modified, f'{main_file_modified}.bbl')
        if os.path.exists(may_exist_bbl) and not os.path.exists(target_bbl):
            shutil.copyfile(may_exist_bbl, target_bbl)

        # https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
        yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history)   # 刷新Gradio前端界面
        ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)

        yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history)   # 刷新Gradio前端界面
        ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)

        if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
            # 只有第二步成功,才能继续下面的步骤
            yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history)    # 刷新Gradio前端界面
            if not os.path.exists(pj(work_folder_original, f'{main_file_original}.bbl')):
                ok = compile_latex_with_timeout(f'bibtex {main_file_original}.aux', work_folder_original)
            if not os.path.exists(pj(work_folder_modified, f'{main_file_modified}.bbl')):
                ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified)

            yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # 刷新Gradio前端界面
            ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
            ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
            ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
            ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)

            if mode!='translate_zh':
                yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # 刷新Gradio前端界面
                print( f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')
                ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')

                yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history)   # 刷新Gradio前端界面
                ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
                ok = compile_latex_with_timeout(f'bibtex merge_diff.aux', work_folder)
                ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
                ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)

        # <---------- 检查结果 ----------->
        results_ = ""
        original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf'))
        modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf'))
        diff_pdf_success     = os.path.exists(pj(work_folder, f'merge_diff.pdf'))
        results_ += f"原始PDF编译是否成功: {original_pdf_success};"
        results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
        results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
        yield from update_ui_lastest_msg(f'第{n_fix}编译结束:<br/>{results_}...', chatbot, history) # 刷新Gradio前端界面

        if diff_pdf_success:
            result_pdf = pj(work_folder_modified, f'merge_diff.pdf')    # get pdf path
            promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
        if modified_pdf_success:
            yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 即将退出 ...', chatbot, history)    # 刷新Gradio前端界面
            result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf') # get pdf path
            origin_pdf = pj(work_folder_original, f'{main_file_original}.pdf') # get pdf path
            if os.path.exists(pj(work_folder, '..', 'translation')):
                shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf'))
            promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
            # 将两个PDF拼接
            if original_pdf_success:
                try:
                    from .latex_toolbox import merge_pdfs
                    concat_pdf = pj(work_folder_modified, f'comparison.pdf')
                    merge_pdfs(origin_pdf, result_pdf, concat_pdf)
                    promote_file_to_downloadzone(concat_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
                except Exception as e:
                    pass
            return True # 成功啦
        else:
            if n_fix>=max_try: break
            n_fix += 1
            can_retry, main_file_modified, buggy_lines = remove_buggy_lines(
                file_path=pj(work_folder_modified, f'{main_file_modified}.tex'),
                log_path=pj(work_folder_modified, f'{main_file_modified}.log'),
                tex_name=f'{main_file_modified}.tex',
                tex_name_pure=f'{main_file_modified}',
                n_fix=n_fix,
                work_folder_modified=work_folder_modified,
                fixed_line=fixed_line
            )
            yield from update_ui_lastest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history)   # 刷新Gradio前端界面
            if not can_retry: break

    return False # 失败啦


def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
    # write html
    try:
        import shutil
        from ..crazy_utils import construct_html
        from toolbox import gen_time_str
        ch = construct_html()
        orig = ""
        trans = ""
        final = []
        for c,r in zip(sp_file_contents, sp_file_result):
            final.append(c)
            final.append(r)
        for i, k in enumerate(final):
            if i%2==0:
                orig = k
            if i%2==1:
                trans = k
                ch.add_row(a=orig, b=trans)
        create_report_file_name = f"{gen_time_str()}.trans.html"
        ch.save_file(create_report_file_name)
        shutil.copyfile(pj('./gpt_log/', create_report_file_name), pj(project_folder, create_report_file_name))
        promote_file_to_downloadzone(file=f'./gpt_log/{create_report_file_name}', chatbot=chatbot)
    except:
        from toolbox import trimmed_format_exc
        print('writing html result failed:', trimmed_format_exc())
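The splitting pipeline in this file rests on a per-character mask: all text starts as TRANSFORM, regex passes flip protected spans to PRESERVE, and the mask is then folded into alternating preserve/transform segments (the linked list). A simplified standalone sketch of the idea, using plain tuples instead of the project's LinkedListNode:

# Simplified sketch of the PRESERVE/TRANSFORM masking used by split_subprocess.
import re
import numpy as np

PRESERVE, TRANSFORM = 0, 1

text = r"Intro text \begin{equation} e=mc^2 \end{equation} outro text"
mask = np.zeros(len(text), dtype=np.uint8) + TRANSFORM

# flip every equation environment to PRESERVE so GPT never touches it
for m in re.finditer(r"\\begin\{equation\}(.*?)\\end\{equation\}", text, re.DOTALL):
    mask[m.start():m.end()] = PRESERVE

# fold the mask into contiguous segments (the linked-list idea, as tuples)
segments, start = [], 0
for i in range(1, len(text) + 1):
    if i == len(text) or mask[i] != mask[start]:
        segments.append((bool(mask[start] == PRESERVE), text[start:i]))
        start = i
for preserved, chunk in segments:
    print('PRESERVE' if preserved else 'TRANSFORM', repr(chunk))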
crazy_functions/latex_fns/latex_toolbox.py
ADDED
@@ -0,0 +1,456 @@
import os, shutil
import re
import numpy as np
PRESERVE = 0
TRANSFORM = 1

pj = os.path.join

class LinkedListNode():
    """
    Linked List Node
    """
    def __init__(self, string, preserve=True) -> None:
        self.string = string
        self.preserve = preserve
        self.next = None
        self.range = None
        # self.begin_line = 0
        # self.begin_char = 0

def convert_to_linklist(text, mask):
    root = LinkedListNode("", preserve=True)
    current_node = root
    for c, m, i in zip(text, mask, range(len(text))):
        if (m==PRESERVE and current_node.preserve) \
            or (m==TRANSFORM and not current_node.preserve):
            # add
            current_node.string += c
        else:
            current_node.next = LinkedListNode(c, preserve=(m==PRESERVE))
            current_node = current_node.next
    return root

def post_process(root):
    # 修复括号
    node = root
    while True:
        string = node.string
        if node.preserve:
            node = node.next
            if node is None: break
            continue
        def break_check(string):
            str_stack = [""] # (lv, index)
            for i, c in enumerate(string):
                if c == '{':
                    str_stack.append('{')
                elif c == '}':
                    if len(str_stack) == 1:
                        print('stack fix')
                        return i
                    str_stack.pop(-1)
                else:
                    str_stack[-1] += c
            return -1
        bp = break_check(string)

        if bp == -1:
            pass
        elif bp == 0:
            node.string = string[:1]
            q = LinkedListNode(string[1:], False)
            q.next = node.next
            node.next = q
        else:
            node.string = string[:bp]
            q = LinkedListNode(string[bp:], False)
            q.next = node.next
            node.next = q

        node = node.next
        if node is None: break

    # 屏蔽空行和太短的句子
    node = root
    while True:
        if len(node.string.strip('\n').strip(''))==0: node.preserve = True
        if len(node.string.strip('\n').strip(''))<42: node.preserve = True
        node = node.next
        if node is None: break
    node = root
    while True:
        if node.next and node.preserve and node.next.preserve:
            node.string += node.next.string
            node.next = node.next.next
        node = node.next
        if node is None: break

    # 将前后断行符脱离
    node = root
    prev_node = None
    while True:
        if not node.preserve:
            lstriped_ = node.string.lstrip().lstrip('\n')
            if (prev_node is not None) and (prev_node.preserve) and (len(lstriped_)!=len(node.string)):
                prev_node.string += node.string[:-len(lstriped_)]
                node.string = lstriped_
            rstriped_ = node.string.rstrip().rstrip('\n')
            if (node.next is not None) and (node.next.preserve) and (len(rstriped_)!=len(node.string)):
                node.next.string = node.string[len(rstriped_):] + node.next.string
                node.string = rstriped_
        # =====
        prev_node = node
        node = node.next
        if node is None: break

    # 标注节点的行数范围
    node = root
    n_line = 0
    expansion = 2
    while True:
        n_l = node.string.count('\n')
        node.range = [n_line-expansion, n_line+n_l+expansion] # 失败时,扭转的范围
        n_line = n_line+n_l
        node = node.next
        if node is None: break
    return root


"""
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Latex segmentation with a binary mask (PRESERVE=0, TRANSFORM=1)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"""


def set_forbidden_text(text, mask, pattern, flags=0):
    """
    Add a preserve text area in this paper
    e.g. with pattern = r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}"
    you can mask out (mask = PRESERVE so that text become untouchable for GPT)
    everything between "\begin{equation}" and "\end{equation}"
    """
    if isinstance(pattern, list): pattern = '|'.join(pattern)
    pattern_compile = re.compile(pattern, flags)
    for res in pattern_compile.finditer(text):
        mask[res.span()[0]:res.span()[1]] = PRESERVE
    return text, mask

def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
    """
    Move area out of preserve area (make text editable for GPT)
    count the number of the braces so as to catch compelete text area.
    e.g.
    \begin{abstract} blablablablablabla. \end{abstract}
    """
    if isinstance(pattern, list): pattern = '|'.join(pattern)
    pattern_compile = re.compile(pattern, flags)
    for res in pattern_compile.finditer(text):
        if not forbid_wrapper:
            mask[res.span()[0]:res.span()[1]] = TRANSFORM
        else:
            mask[res.regs[0][0]: res.regs[1][0]] = PRESERVE  # '\\begin{abstract}'
            mask[res.regs[1][0]: res.regs[1][1]] = TRANSFORM # abstract
            mask[res.regs[1][1]: res.regs[0][1]] = PRESERVE  # abstract
    return text, mask

def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
    """
    Add a preserve text area in this paper (text become untouchable for GPT).
    count the number of the braces so as to catch compelete text area.
    e.g.
    \caption{blablablablabla\texbf{blablabla}blablabla.}
    """
    pattern_compile = re.compile(pattern, flags)
    for res in pattern_compile.finditer(text):
        brace_level = -1
        p = begin = end = res.regs[0][0]
        for _ in range(1024*16):
            if text[p] == '}' and brace_level == 0: break
            elif text[p] == '}': brace_level -= 1
            elif text[p] == '{': brace_level += 1
            p += 1
        end = p+1
        mask[begin:end] = PRESERVE
    return text, mask

def reverse_forbidden_text_careful_brace(text, mask, pattern, flags=0, forbid_wrapper=True):
    """
    Move area out of preserve area (make text editable for GPT)
    count the number of the braces so as to catch compelete text area.
    e.g.
    \caption{blablablablabla\texbf{blablabla}blablabla.}
    """
    pattern_compile = re.compile(pattern, flags)
    for res in pattern_compile.finditer(text):
        brace_level = 0
        p = begin = end = res.regs[1][0]
        for _ in range(1024*16):
            if text[p] == '}' and brace_level == 0: break
            elif text[p] == '}': brace_level -= 1
            elif text[p] == '{': brace_level += 1
            p += 1
        end = p
        mask[begin:end] = TRANSFORM
        if forbid_wrapper:
            mask[res.regs[0][0]:begin] = PRESERVE
            mask[end:res.regs[0][1]] = PRESERVE
    return text, mask

def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42):
    """
    Find all \begin{} ... \end{} text block that with less than limit_n_lines lines.
    Add it to preserve area
    """
    pattern_compile = re.compile(pattern, flags)
    def search_with_line_limit(text, mask):
        for res in pattern_compile.finditer(text):
            cmd = res.group(1)  # begin{what}
            this = res.group(2) # content between begin and end
            this_mask = mask[res.regs[2][0]:res.regs[2][1]]
            white_list = ['document', 'abstract', 'lemma', 'definition', 'sproof',
                          'em', 'emph', 'textit', 'textbf', 'itemize', 'enumerate']
            if (cmd in white_list) or this.count('\n') >= limit_n_lines:  # use a magical number 42
                this, this_mask = search_with_line_limit(this, this_mask)
                mask[res.regs[2][0]:res.regs[2][1]] = this_mask
            else:
                mask[res.regs[0][0]:res.regs[0][1]] = PRESERVE
        return text, mask
    return search_with_line_limit(text, mask)



"""
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Latex Merge File
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"""

def find_main_tex_file(file_manifest, mode):
    """
    在多Tex文档中,寻找主文件,必须包含documentclass,返回找到的第一个。
    P.S. 但愿没人把latex模板放在里面传进来 (6.25 加入判定latex模板的代码)
    """
    canidates = []
    for texf in file_manifest:
        if os.path.basename(texf).startswith('merge'):
            continue
        with open(texf, 'r', encoding='utf8', errors='ignore') as f:
            file_content = f.read()
        if r'\documentclass' in file_content:
            canidates.append(texf)
        else:
            continue

    if len(canidates) == 0:
        raise RuntimeError('无法找到一个主Tex文件(包含documentclass关键字)')
    elif len(canidates) == 1:
        return canidates[0]
    else: # if len(canidates) >= 2 通过一些Latex模板中常见(但通常不会出现在正文)的单词,对不同latex源文件扣分,取评分最高者返回
        canidates_score = []
        # 给出一些判定模板文档的词作为扣分项
        unexpected_words = ['\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
        expected_words = ['\input', '\ref', '\cite']
        for texf in canidates:
            canidates_score.append(0)
            with open(texf, 'r', encoding='utf8', errors='ignore') as f:
                file_content = f.read()
            for uw in unexpected_words:
                if uw in file_content:
                    canidates_score[-1] -= 1
            for uw in expected_words:
                if uw in file_content:
                    canidates_score[-1] += 1
        select = np.argmax(canidates_score) # 取评分最高者返回
        return canidates[select]

def rm_comments(main_file):
    new_file_remove_comment_lines = []
    for l in main_file.splitlines():
        # 删除整行的空注释
        if l.lstrip().startswith("%"):
            pass
        else:
            new_file_remove_comment_lines.append(l)
    main_file = '\n'.join(new_file_remove_comment_lines)
    # main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file)  # 将 \include 命令转换为 \input 命令
    main_file = re.sub(r'(?<!\\)%.*', '', main_file)  # 使用正则表达式查找半行注释, 并替换为空字符串
    return main_file

def find_tex_file_ignore_case(fp):
    dir_name = os.path.dirname(fp)
    base_name = os.path.basename(fp)
    if not base_name.endswith('.tex'): base_name+='.tex'
    if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)
    # go case in-sensitive
    import glob
    for f in glob.glob(dir_name+'/*.tex'):
        base_name_s = os.path.basename(fp)
        if base_name_s.lower() == base_name.lower(): return f
    return None

def merge_tex_files_(project_foler, main_file, mode):
    """
    Merge Tex project recrusively
    """
    main_file = rm_comments(main_file)
    for s in reversed([q for q in re.finditer(r"\\input\{(.*?)\}", main_file, re.M)]):
        f = s.group(1)
        fp = os.path.join(project_foler, f)
        fp = find_tex_file_ignore_case(fp)
        if fp:
            with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()
        else:
            raise RuntimeError(f'找不到{fp},Tex源文件缺失!')
        c = merge_tex_files_(project_foler, c, mode)
        main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]
    return main_file

def merge_tex_files(project_foler, main_file, mode):
    """
    Merge Tex project recrusively
    P.S. 顺便把CTEX塞进去以支持中文
    P.S. 顺便把Latex的注释去除
    """
    main_file = merge_tex_files_(project_foler, main_file, mode)
    main_file = rm_comments(main_file)

    if mode == 'translate_zh':
        # find paper documentclass
        pattern = re.compile(r'\\documentclass.*\n')
        match = pattern.search(main_file)
        assert match is not None, "Cannot find documentclass statement!"
        position = match.end()
        add_ctex = '\\usepackage{ctex}\n'
        add_url = '\\usepackage{url}\n' if '{url}' not in main_file else ''
        main_file = main_file[:position] + add_ctex + add_url + main_file[position:]
        # fontset=windows
        import platform
        main_file = re.sub(r"\\documentclass\[(.*?)\]{(.*?)}", r"\\documentclass[\1,fontset=windows,UTF8]{\2}",main_file)
        main_file = re.sub(r"\\documentclass{(.*?)}", r"\\documentclass[fontset=windows,UTF8]{\1}",main_file)
        # find paper abstract
        pattern_opt1 = re.compile(r'\\begin\{abstract\}.*\n')
        pattern_opt2 = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
        match_opt1 = pattern_opt1.search(main_file)
        match_opt2 = pattern_opt2.search(main_file)
        assert (match_opt1 is not None) or (match_opt2 is not None), "Cannot find paper abstract section!"
    return main_file


"""
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Post process
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"""
def mod_inbraket(match):
    """
    为啥chatgpt会把cite里面的逗号换成中文逗号呀
    """
    # get the matched string
    cmd = match.group(1)
    str_to_modify = match.group(2)
    # modify the matched string
    str_to_modify = str_to_modify.replace(':', ':')    # 前面是中文冒号,后面是英文冒号
    str_to_modify = str_to_modify.replace(',', ',')    # 前面是中文逗号,后面是英文逗号
    # str_to_modify = 'BOOM'
    return "\\" + cmd + "{" + str_to_modify + "}"

def fix_content(final_tex, node_string):
    """
    Fix common GPT errors to increase success rate
    """
    final_tex = re.sub(r"(?<!\\)%", "\\%", final_tex)
    final_tex = re.sub(r"\\([a-z]{2,10})\ \{", r"\\\1{", string=final_tex)
    final_tex = re.sub(r"\\\ ([a-z]{2,10})\{", r"\\\1{", string=final_tex)
    final_tex = re.sub(r"\\([a-z]{2,10})\{([^\}]*?)\}", mod_inbraket, string=final_tex)

    if "Traceback" in final_tex and "[Local Message]" in final_tex:
        final_tex = node_string # 出问题了,还原原文
    if node_string.count('\\begin') != final_tex.count('\\begin'):
        final_tex = node_string # 出问题了,还原原文
    if node_string.count('\_') > 0 and node_string.count('\_') > final_tex.count('\_'):
        # walk and replace any _ without \
        final_tex = re.sub(r"(?<!\\)_", "\\_", final_tex)

    def compute_brace_level(string):
        # this function count the number of { and }
        brace_level = 0
        for c in string:
            if c == "{": brace_level += 1
            elif c == "}": brace_level -= 1
        return brace_level
    def join_most(tex_t, tex_o):
        # this function join translated string and original string when something goes wrong
        p_t = 0
        p_o = 0
        def find_next(string, chars, begin):
            p = begin
            while p < len(string):
                if string[p] in chars: return p, string[p]
                p += 1
            return None, None
        while True:
            res1, char = find_next(tex_o, ['{','}'], p_o)
            if res1 is None: break
            res2, char = find_next(tex_t, [char], p_t)
            if res2 is None: break
            p_o = res1 + 1
            p_t = res2 + 1
        return tex_t[:p_t] + tex_o[p_o:]

    if compute_brace_level(final_tex) != compute_brace_level(node_string):
        # 出问题了,还原部分原文,保证括号正确
        final_tex = join_most(final_tex, node_string)
    return final_tex

def compile_latex_with_timeout(command, cwd, timeout=60):
    import subprocess
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)
    try:
        stdout, stderr = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()
        stdout, stderr = process.communicate()
        print("Process timed out!")
        return False
    return True



def merge_pdfs(pdf1_path, pdf2_path, output_path):
    import PyPDF2
    Percent = 0.8
    # Open the first PDF file
    with open(pdf1_path, 'rb') as pdf1_file:
        pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
        # Open the second PDF file
        with open(pdf2_path, 'rb') as pdf2_file:
            pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
            # Create a new PDF file to store the merged pages
            output_writer = PyPDF2.PdfFileWriter()
            # Determine the number of pages in each PDF file
            num_pages = max(pdf1_reader.numPages, pdf2_reader.numPages)
            # Merge the pages from the two PDF files
            for page_num in range(num_pages):
                # Add the page from the first PDF file
                if page_num < pdf1_reader.numPages:
                    page1 = pdf1_reader.getPage(page_num)
                else:
                    page1 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
                # Add the page from the second PDF file
                if page_num < pdf2_reader.numPages:
                    page2 = pdf2_reader.getPage(page_num)
                else:
                    page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
                # Create a new empty page with double width
                new_page = PyPDF2.PageObject.createBlankPage(
                    width = int(int(page1.mediaBox.getWidth()) + int(page2.mediaBox.getWidth()) * Percent),
                    height = max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight())
                )
                new_page.mergeTranslatedPage(page1, 0, 0)
                new_page.mergeTranslatedPage(page2, int(int(page1.mediaBox.getWidth())-int(page2.mediaBox.getWidth())* (1-Percent)), 0)
                output_writer.addPage(new_page)
            # Save the merged PDF file
            with open(output_path, 'wb') as output_file:
                output_writer.write(output_file)
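rm_comments above works in two passes: whole comment lines are dropped first, then half-line comments are removed with a lookbehind that spares escaped \%. A quick standalone check of those regexes:

# Standalone check of the comment-stripping passes used by rm_comments.
import re

src = "line one % trailing comment\n% whole-line comment\nkeep 100\\% of this"
# pass 1: drop whole-line comments
kept = [l for l in src.splitlines() if not l.lstrip().startswith("%")]
# pass 2: remove half-line comments, but leave escaped \% alone
cleaned = re.sub(r'(?<!\\)%.*', '', '\n'.join(kept))
print(cleaned)   # "line one " plus the untouched "keep 100\% of this" line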
crazy_functions/live_audio/aliyunASR.py
ADDED
@@ -0,0 +1,130 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
+import time, threading, json
+
+
+class AliyunASR():
+
+    def test_on_sentence_begin(self, message, *args):
+        # print("test_on_sentence_begin:{}".format(message))
+        pass
+
+    def test_on_sentence_end(self, message, *args):
+        # print("test_on_sentence_end:{}".format(message))
+        message = json.loads(message)
+        self.parsed_sentence = message['payload']['result']
+        self.event_on_entence_end.set()
+        print(self.parsed_sentence)
+
+    def test_on_start(self, message, *args):
+        # print("test_on_start:{}".format(message))
+        pass
+
+    def test_on_error(self, message, *args):
+        print("on_error args=>{}".format(args))
+        pass
+
+    def test_on_close(self, *args):
+        self.aliyun_service_ok = False
+        pass
+
+    def test_on_result_chg(self, message, *args):
+        # print("test_on_chg:{}".format(message))
+        message = json.loads(message)
+        self.parsed_text = message['payload']['result']
+        self.event_on_result_chg.set()
+
+    def test_on_completed(self, message, *args):
+        # print("on_completed:args=>{} message=>{}".format(args, message))
+        pass
+
+
+    def audio_convertion_thread(self, uuid):
+        # capture audio in an asynchronous thread
+        import nls  # pip install git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+        import tempfile
+        from scipy import io
+        from toolbox import get_conf
+        from .audio_io import change_sample_rate
+        from .audio_io import RealtimeAudioDistribution
+        NEW_SAMPLERATE = 16000
+        rad = RealtimeAudioDistribution()
+        rad.clean_up()
+        temp_folder = tempfile.gettempdir()
+        TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
+        if len(TOKEN) == 0:
+            TOKEN = self.get_token()
+        self.aliyun_service_ok = True
+        URL="wss://nls-gateway.aliyuncs.com/ws/v1"
+        sr = nls.NlsSpeechTranscriber(
+                    url=URL,
+                    token=TOKEN,
+                    appkey=APPKEY,
+                    on_sentence_begin=self.test_on_sentence_begin,
+                    on_sentence_end=self.test_on_sentence_end,
+                    on_start=self.test_on_start,
+                    on_result_changed=self.test_on_result_chg,
+                    on_completed=self.test_on_completed,
+                    on_error=self.test_on_error,
+                    on_close=self.test_on_close,
+                    callback_args=[uuid.hex]
+                )
+
+        r = sr.start(aformat="pcm",
+                     enable_intermediate_result=True,
+                     enable_punctuation_prediction=True,
+                     enable_inverse_text_normalization=True)
+
+        while not self.stop:
+            # time.sleep(self.capture_interval)
+            audio = rad.read(uuid.hex)
+            if audio is not None:
+                # convert to pcm file
+                temp_file = f'{temp_folder}/{uuid.hex}.pcm' #
+                dsdata = change_sample_rate(audio, rad.rate, NEW_SAMPLERATE) # 48000 --> 16000
+                io.wavfile.write(temp_file, NEW_SAMPLERATE, dsdata)
+                # read pcm binary
+                with open(temp_file, "rb") as f: data = f.read()
+                # print('audio len:', len(audio), '\t ds len:', len(dsdata), '\t need n send:', len(data)//640)
+                slices = zip(*(iter(data),) * 640)  # frames of 640 bytes each
+                for i in slices: sr.send_audio(bytes(i))
+            else:
+                time.sleep(0.1)
+
+        if not self.aliyun_service_ok:
+            self.stop = True
+            self.stop_msg = 'Aliyun音频服务异常,请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期。'
+        r = sr.stop()
+
+    def get_token(self):
+        from toolbox import get_conf
+        import json
+        from aliyunsdkcore.request import CommonRequest
+        from aliyunsdkcore.client import AcsClient
+        AccessKey_ID, AccessKey_secret = get_conf('ALIYUN_ACCESSKEY', 'ALIYUN_SECRET')
+
+        # create an AcsClient instance
+        client = AcsClient(
+            AccessKey_ID,
+            AccessKey_secret,
+            "cn-shanghai"
+        )
+
+        # create the request and set its parameters
+        request = CommonRequest()
+        request.set_method('POST')
+        request.set_domain('nls-meta.cn-shanghai.aliyuncs.com')
+        request.set_version('2019-02-28')
+        request.set_action_name('CreateToken')
+
+        try:
+            response = client.do_action_with_exception(request)
+            print(response)
+            jss = json.loads(response)
+            if 'Token' in jss and 'Id' in jss['Token']:
+                token = jss['Token']['Id']
+                expireTime = jss['Token']['ExpireTime']
+                print("token = " + token)
+                print("expireTime = " + str(expireTime))
+        except Exception as e:
+            print(e)
+
+        return token
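The `zip(*(iter(data),) * 640)` idiom in `audio_convertion_thread` is terse; it regroups the PCM byte string into fixed-size frames. A standalone sanity check (plain Python; at 16 kHz 16-bit mono, 640 bytes is 320 samples, i.e. 20 ms per frame):
data = b'\x00\x01' * 3200                   # 6400 bytes of dummy 16-bit PCM
slices = zip(*(iter(data),) * 640)          # regroup the byte stream into 640-byte frames
frames = [bytes(chunk) for chunk in slices]
assert len(frames) == 10 and all(len(f) == 640 for f in frames)
# caveat: zip() silently drops a trailing partial frame, exactly as the send loop above does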
crazy_functions/live_audio/audio_io.py
ADDED
@@ -0,0 +1,51 @@
+import numpy as np
+from scipy import interpolate
+
+def Singleton(cls):
+    _instance = {}
+
+    def _singleton(*args, **kargs):
+        if cls not in _instance:
+            _instance[cls] = cls(*args, **kargs)
+        return _instance[cls]
+
+    return _singleton
+
+
+@Singleton
+class RealtimeAudioDistribution():
+    def __init__(self) -> None:
+        self.data = {}
+        self.max_len = 1024*1024
+        self.rate = 48000   # read-only: samples per second
+
+    def clean_up(self):
+        self.data = {}
+
+    def feed(self, uuid, audio):
+        self.rate, audio_ = audio
+        # print('feed', len(audio_), audio_[-25:])
+        if uuid not in self.data:
+            self.data[uuid] = audio_
+        else:
+            new_arr = np.concatenate((self.data[uuid], audio_))
+            if len(new_arr) > self.max_len: new_arr = new_arr[-self.max_len:]
+            self.data[uuid] = new_arr
+
+    def read(self, uuid):
+        if uuid in self.data:
+            res = self.data.pop(uuid)
+            print('\r read-', len(res), '-', max(res), end='', flush=True)
+        else:
+            res = None
+        return res
+
+def change_sample_rate(audio, old_sr, new_sr):
+    duration = audio.shape[0] / old_sr
+
+    time_old = np.linspace(0, duration, audio.shape[0])
+    time_new = np.linspace(0, duration, int(audio.shape[0] * new_sr / old_sr))
+
+    interpolator = interpolate.interp1d(time_old, audio.T)
+    new_audio = interpolator(time_new).T
+    return new_audio.astype(np.int16)
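A quick sanity check of `change_sample_rate` on the 48 kHz to 16 kHz path that `aliyunASR.py` drives (note the helper resamples by linear interpolation, with no anti-aliasing filter):
import numpy as np

t = np.linspace(0, 0.1, 4800, endpoint=False)                  # 0.1 s at 48 kHz
tone = (np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)  # 440 Hz test tone
resampled = change_sample_rate(tone, old_sr=48000, new_sr=16000)
assert resampled.dtype == np.int16 and len(resampled) == 1600  # 0.1 s at 16 kHz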
crazy_functions/下载arxiv论文翻译摘要.py
CHANGED
@@ -144,11 +144,11 @@ def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, hi
 
     # try to import dependencies; if missing, suggest how to install them
     try:
-        import
+        import bs4
     except:
         report_execption(chatbot, history,
                          a = f"解析项目: {txt}",
-                         b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade
+                         b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4```。")
     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
     return
 
crazy_functions/交互功能函数模板.py
ADDED
@@ -0,0 +1,63 @@
+from toolbox import CatchException, update_ui
+from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+
+
+@CatchException
+def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt             text typed by the user in the input field, e.g. a passage to translate, or a path containing files to process
+    llm_kwargs      GPT model parameters such as temperature and top_p, usually passed through unchanged
+    plugin_kwargs   plugin parameters, usually passed through unchanged
+    chatbot         handle of the chat display box, used to show output to the user
+    history         chat history (context)
+    system_prompt   silent reminder for GPT
+    web_port        the port the application is running on
+    """
+    history = []    # clear history to avoid input overflow
+    chatbot.append(("这是什么功能?", "交互功能函数模板。在执行完成之后, 可以将自身的状态存储到cookie中, 等待用户的再次调用。"))
+    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+
+    state = chatbot._cookies.get('plugin_state_0001', None)  # initialize the plugin state
+
+    if state is None:
+        chatbot._cookies['lock_plugin'] = 'crazy_functions.交互功能函数模板->交互功能模板函数'  # lock the plugin: the next user submission is routed straight back to this function
+        chatbot._cookies['plugin_state_0001'] = 'wait_user_keyword'  # set the plugin state
+
+        chatbot.append(("第一次调用:", "请输入关键词, 我将为您查找相关壁纸, 建议使用英文单词, 插件锁定中,请直接提交即可。"))
+        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+        return
+
+    if state == 'wait_user_keyword':
+        chatbot._cookies['lock_plugin'] = None      # release the plugin lock, to avoid a deadlock if forgotten
+        chatbot._cookies['plugin_state_0001'] = None  # reset the plugin state, to avoid a deadlock if forgotten
+
+        # the plugin is now unlocked
+        chatbot.append((f"获取关键词:{txt}", ""))
+        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+        page_return = get_image_page_by_keyword(txt)
+        inputs=inputs_show_user=f"Extract all image urls in this html page, pick the first 5 images and show them with markdown format: \n\n {page_return}"
+        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+            inputs=inputs, inputs_show_user=inputs_show_user,
+            llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
+            sys_prompt="When you want to show an image, use markdown format. e.g. ![image_description](image_url). If there are no image url provided, answer 'no image url provided'"
+        )
+        chatbot[-1] = [chatbot[-1][0], gpt_say]
+        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+        return
+
+
+
+# ---------------------------------------------------------------------------------
+
+def get_image_page_by_keyword(keyword):
+    import requests
+    from bs4 import BeautifulSoup
+    response = requests.get(f'https://wallhaven.cc/search?q={keyword}', timeout=2)
+    res = "image urls: \n"
+    for image_element in BeautifulSoup(response.content, 'html.parser').findAll("img"):
+        try:
+            res += image_element["data-src"]
+            res += "\n"
+        except:
+            pass
+    return res
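The lock/state cookie round-trip is the whole trick in this template. A toy trace may make the two-call flow clearer; `FakeChatbot` is purely hypothetical, standing in for the real chatbot object and its `_cookies` dict:
# hypothetical two-call trace of the cookie-based plugin lock used above
class FakeChatbot(list):
    def __init__(self):
        super().__init__()
        self._cookies = {}

cb = FakeChatbot()

# call 1: no saved state -> lock the plugin and remember what we are waiting for
assert cb._cookies.get('plugin_state_0001', None) is None
cb._cookies['lock_plugin'] = 'crazy_functions.交互功能函数模板->交互功能模板函数'
cb._cookies['plugin_state_0001'] = 'wait_user_keyword'

# call 2: the next submission is routed straight back here; release the lock
assert cb._cookies['plugin_state_0001'] == 'wait_user_keyword'
cb._cookies['lock_plugin'] = None
cb._cookies['plugin_state_0001'] = None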
crazy_functions/命令行助手.py
ADDED
@@ -0,0 +1,31 @@
+from toolbox import CatchException, update_ui, gen_time_str
+from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from .crazy_utils import input_clipping
+import copy, json
+
+@CatchException
+def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt             text typed by the user in the input field, e.g. a passage to translate, or a path containing files to process
+    llm_kwargs      GPT model parameters such as temperature and top_p, usually passed through unchanged
+    plugin_kwargs   plugin parameters, not used here yet
+    chatbot         handle of the chat display box, used to show output to the user
+    history         chat history (context)
+    system_prompt   silent reminder for GPT
+    web_port        the port the application is running on
+    """
+    # clear history to avoid input overflow
+    history = []
+
+    # input
+    i_say = "请写bash命令实现以下功能:" + txt
+    # begin
+    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+        inputs=i_say, inputs_show_user=txt,
+        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
+        sys_prompt="你是一个Linux大师级用户。注意,当我要求你写bash命令时,尽可能地仅用一行命令解决我的要求。"
+    )
+    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+
+
+
crazy_functions/图片生成.py
CHANGED
@@ -27,8 +27,10 @@ def gen_image(llm_kwargs, prompt, resolution="256x256"):
     }
     response = requests.post(url, headers=headers, json=data, proxies=proxies)
     print(response.content)
-
-
+    try:
+        image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
+    except:
+        raise RuntimeError(response.content.decode())
     # save the file locally
     r = requests.get(image_url, proxies=proxies)
     file_path = 'gpt_log/image_gen/'
@@ -53,7 +55,7 @@ def 图片生成(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     web_port        the port the application is running on
     """
     history = []    # clear history to avoid input overflow
-    chatbot.append(("这是什么功能?", "[Local Message] 生成图像, 请先把模型切换至gpt
+    chatbot.append(("这是什么功能?", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文效果不理想, 请尝试英文Prompt。正在处理中 ....."))
     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI (requesting GPT takes a while, so update promptly first)
     if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    resolution = plugin_kwargs.get("advanced_arg", '256x256')
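The new try/except above assumes the usual OpenAI image-generation response shape. A representative, hand-written payload (not captured from a real response) shows the path it extracts:
import json

payload = b'{"created": 1680000000, "data": [{"url": "https://example.com/img.png"}]}'
image_url = json.loads(payload.decode('utf8'))['data'][0]['url']   # same expression as above
assert image_url.startswith('https://')
# anything else (e.g. an error object with no 'data' key) now raises with the raw body attached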
crazy_functions/对话历史存档.py
CHANGED
@@ -12,7 +12,7 @@ def write_chat_to_file(chatbot, history=None, file_name=None):
         file_name = 'chatGPT对话历史' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
     os.makedirs('./gpt_log/', exist_ok=True)
     with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
-        from theme import advanced_css
+        from themes.theme import advanced_css
         f.write(f'<!DOCTYPE html><head><meta charset="utf-8"><title>对话历史</title><style>{advanced_css}</style></head>')
         for i, contents in enumerate(chatbot):
             for j, content in enumerate(contents):
crazy_functions/总结word文档.py
CHANGED
@@ -14,17 +14,19 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
         doc = Document(fp)
         file_content = "\n".join([para.text for para in doc.paragraphs])
     else:
-
-
-
-
-
-
-
-
-
-
-
+        try:
+            import win32com.client
+            word = win32com.client.Dispatch("Word.Application")
+            word.visible = False
+            # open the file
+            doc = word.Documents.Open(os.getcwd() + '/' + fp)
+            # file_content = doc.Content.Text
+            doc = word.ActiveDocument
+            file_content = doc.Range().Text
+            doc.Close()
+            word.Quit()
+        except:
+            raise RuntimeError('请先将.doc文档转换为.docx文档。')
 
     print(file_content)
     # file names under private_upload are often garbled after unzipping (rar and 7z are fine), so only the document content is analyzed and file names are not passed in
crazy_functions/批量Markdown翻译.py
CHANGED
@@ -1,5 +1,7 @@
-
-from toolbox import
+import glob, time, os, re
+from toolbox import update_ui, trimmed_format_exc, gen_time_str, disable_auto_promotion
+from toolbox import CatchException, report_execption, write_history_to_file
+from toolbox import promote_file_to_downloadzone, get_log_folder
 fast_debug = False
 
 class PaperFileGroup():
@@ -42,13 +44,13 @@ class PaperFileGroup():
     def write_result(self, language):
         manifest = []
         for path, res in zip(self.file_paths, self.file_result):
-
-
+            dst_file = os.path.join(get_log_folder(), f'{gen_time_str()}.md')
+            with open(dst_file, 'w', encoding='utf8') as f:
+                manifest.append(dst_file)
             f.write(res)
         return manifest
 
 def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
-    import time, os, re
     from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
 
     # <-------- read the Markdown files and strip all comments ---------->
@@ -102,28 +104,38 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
         print(trimmed_format_exc())
 
     # <-------- collect the results and exit ---------->
-    create_report_file_name =
-    res =
+    create_report_file_name = gen_time_str() + f"-chatgpt.md"
+    res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
+    promote_file_to_downloadzone(res, chatbot=chatbot)
    history = gpt_response_collection
     chatbot.append((f"{fp}完成了吗?", res))
     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
 
 
-def get_files_from_everything(txt):
-
-
+def get_files_from_everything(txt, preference=''):
+    if txt == "": return False, None, None
     success = True
     if txt.startswith('http'):
-        # remote file on the network
-        txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
-        txt = txt.replace("/blob/", "/")
         import requests
         from toolbox import get_conf
         proxies, = get_conf('proxies')
+        # remote file on the network
+        if preference == 'Github':
+            print('正在从github下载资源 ...')
+            if not txt.endswith('.md'):
+                # Make a request to the GitHub API to retrieve the repository information
+                url = txt.replace("https://github.com/", "https://api.github.com/repos/") + '/readme'
+                response = requests.get(url, proxies=proxies)
+                txt = response.json()['download_url']
+            else:
+                txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
+                txt = txt.replace("/blob/", "/")
+
         r = requests.get(txt, proxies=proxies)
-
-        project_folder = '
-
+        download_local = f'{get_log_folder(plugin_name="批量Markdown翻译")}/raw-readme-{gen_time_str()}.md'
+        project_folder = f'{get_log_folder(plugin_name="批量Markdown翻译")}'
+        with open(download_local, 'wb+') as f: f.write(r.content)
+        file_manifest = [download_local]
     elif txt.endswith('.md'):
         # a file given directly
         file_manifest = [txt]
@@ -145,11 +157,11 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
                        "函数插件功能?",
                        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+    disable_auto_promotion(chatbot)
 
     # try to import dependencies; if missing, suggest how to install them
     try:
         import tiktoken
-        import glob, os
     except:
         report_execption(chatbot, history,
                          a=f"解析项目: {txt}",
@@ -158,7 +170,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
         return
     history = []    # clear history to avoid input overflow
 
-    success, file_manifest, project_folder = get_files_from_everything(txt)
+    success, file_manifest, project_folder = get_files_from_everything(txt, preference="Github")
 
     if not success:
         # nothing found
@@ -185,11 +197,11 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
                        "函数插件功能?",
                        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+    disable_auto_promotion(chatbot)
 
     # try to import dependencies; if missing, suggest how to install them
     try:
         import tiktoken
-        import glob, os
     except:
         report_execption(chatbot, history,
                          a=f"解析项目: {txt}",
@@ -218,11 +230,11 @@ def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
                        "函数插件功能?",
                        "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+    disable_auto_promotion(chatbot)
 
     # try to import dependencies; if missing, suggest how to install them
     try:
         import tiktoken
-        import glob, os
     except:
         report_execption(chatbot, history,
                          a=f"解析项目: {txt}",
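The two GitHub URL rewrites inside `get_files_from_everything` can be hard to read inline; here they are applied to a concrete, hypothetical repository URL:
repo = "https://github.com/binary-husky/gpt_academic"          # hypothetical input
api_readme = repo.replace("https://github.com/", "https://api.github.com/repos/") + '/readme'
# -> https://api.github.com/repos/binary-husky/gpt_academic/readme (JSON carrying a download_url)

blob = "https://github.com/binary-husky/gpt_academic/blob/master/README.md"
raw = blob.replace("https://github.com/", "https://raw.githubusercontent.com/").replace("/blob/", "/")
# -> https://raw.githubusercontent.com/binary-husky/gpt_academic/master/README.md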
crazy_functions/批量总结PDF文档.py
CHANGED
@@ -1,121 +1,107 @@
-from toolbox import update_ui
+from toolbox import update_ui, promote_file_to_downloadzone, gen_time_str
 from toolbox import CatchException, report_execption, write_results_to_file
-import re
-import unicodedata
-fast_debug = False
 from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from .crazy_utils import read_and_clean_pdf_text
+from .crazy_utils import input_clipping
 
-def is_paragraph_break(match):
-    """
-    Decide from the given regex match whether a newline marks a paragraph break.
-    If the character before the newline ends a sentence (period, exclamation or question mark)
-    and the next character is uppercase, the newline is more likely a paragraph break.
-    The length of the preceding content is also used to judge whether the paragraph is long enough.
-    """
-    prev_char, next_char = match.groups()
 
-    # sentence-ending punctuation
-    sentence_endings = ".!?"
-
-    # minimum paragraph length threshold
-    min_paragraph_length = 140
-
-    if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length:
-        return "\n\n"
-    else:
-        return " "
-
-def normalize_text(text):
-    """
-    Normalize the text by converting ligatures and other special symbols to their basic forms,
-    e.g. the ligature "fi" into "f" and "i".
-    """
-    # normalize the text, decomposing ligatures
-    normalized_text = unicodedata.normalize("NFKD", text)
-
-    # strip other special characters
-    cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text)
-
-    return cleaned_text
-
-def clean_text(raw_text):
-    """
-    Clean and reformat the raw text extracted from a PDF.
-    1. Normalize the raw text.
-    2. Rejoin words hyphenated across line breaks.
-    3. Use heuristics to decide whether each newline is a paragraph break, and replace it accordingly.
-    """
-    # normalize the text
-    normalized_text = normalize_text(raw_text)
-
-    # rejoin words hyphenated across line breaks
-    text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text)
-
-    # locate newlines in the original text by the characters around them
-    newlines = re.compile(r'(\S)\n(\S)')
-
-    # replace each newline with a space or a paragraph separator according to the heuristic
-    final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text)
-
-    return final_text.strip()
 
 def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+    file_write_buffer = []
+    for file_name in file_manifest:
+        print('begin analysis on:', file_name)
+        ############################## <Step 0: split the PDF> ##################################
+        # recursively split the PDF; each chunk (ideally one complete section, e.g. introduction
+        # or experiment, split further only when necessary) must stay under 2500 tokens
+        file_content, page_one = read_and_clean_pdf_text(file_name) # (try to) split the PDF by section
+        file_content = file_content.encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars
+        page_one = str(page_one).encode('utf-8', 'ignore').decode()  # avoid reading non-utf8 chars
+
+        TOKEN_LIMIT_PER_FRAGMENT = 2500
+
+        from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+        from request_llm.bridge_all import model_info
+        enc = model_info["gpt-3.5-turbo"]['tokenizer']
+        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
+        paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+            txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
+        page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+            txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
+        # for better results, strip everything after the Introduction (if present)
+        paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
+
+        ############################## <Step 1: extract high-value information from the abstract into history> ##################################
+        final_results = []
+        final_results.append(paper_meta)
+
+        ############################## <Step 2: iterate over the whole article, distilling it> ##################################
+        i_say_show_user = f'首先你在中文语境下通读整篇论文。'; gpt_say = "[Local Message] 收到。"           # user prompt
+        chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[])    # update the UI
+
+        iteration_results = []
+        last_iteration_result = paper_meta  # the initial value is the abstract
+        MAX_WORD_TOTAL = 4096 * 0.7
+        n_fragment = len(paper_fragments)
+        if n_fragment >= 20: print('文章极长,不能达到预期效果')
+        for i in range(n_fragment):
+            NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment
+            i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i]}"
+            i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i][:200]}"
+            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user,  # i_say = the question actually sent to chatgpt, i_say_show_user = the question shown to the user
+                                                                               llm_kwargs, chatbot,
+                                                                               history=["The main idea of the previous section is?", last_iteration_result], # iterate on the previous result
+                                                                               sys_prompt="Extract the main idea of this section with Chinese."  # prompt
+                                                                               )
+            iteration_results.append(gpt_say)
+            last_iteration_result = gpt_say
+
+        ############################## <Step 3: organize history and extract the summary> ##################################
+        final_results.extend(iteration_results)
+        final_results.append(f'Please conclude this paper discussed above。')
+        # This prompt is from https://github.com/kaixindelele/ChatPaper/blob/main/chat_paper.py
+        NUM_OF_WORD = 1000
+        i_say = """
+1. Mark the title of the paper (with Chinese translation)
+2. list all the authors' names (use English)
+3. mark the first author's affiliation (output Chinese translation only)
+4. mark the keywords of this article (use English)
+5. link to the paper, Github code link (if available, fill in Github:None if not)
+6. summarize according to the following four points.Be sure to use Chinese answers (proper nouns need to be marked in English)
+    - (1):What is the research background of this article?
+    - (2):What are the past methods? What are the problems with them? Is the approach well motivated?
+    - (3):What is the research methodology proposed in this paper?
+    - (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?
+Follow the format of the output that follows:
+1. Title: xxx\n\n
+2. Authors: xxx\n\n
+3. Affiliation: xxx\n\n
+4. Keywords: xxx\n\n
+5. Urls: xxx or xxx , xxx \n\n
+6. Summary: \n\n
+    - (1):xxx;\n
+    - (2):xxx;\n
+    - (3):xxx;\n
+    - (4):xxx.\n\n
+Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible,
+do not have too much repetitive information, numerical values using the original numbers.
+"""
+        # This prompt is from https://github.com/kaixindelele/ChatPaper/blob/main/chat_paper.py
+        file_write_buffer.extend(final_results)
+        i_say, final_results = input_clipping(i_say, final_results, max_token_limit=2000)
         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-            inputs=i_say,
-
-
-
-
-
-
-
-
-
-
-
-
-            yield from update_ui(chatbot=chatbot, history=history, msg=msg)  # refresh the UI
+            inputs=i_say, inputs_show_user='开始最终总结',
+            llm_kwargs=llm_kwargs, chatbot=chatbot, history=final_results,
+            sys_prompt= f"Extract the main idea of this paper with less than {NUM_OF_WORD} Chinese characters"
+        )
+        final_results.append(gpt_say)
+        file_write_buffer.extend([i_say, gpt_say])
+        ############################## <Step 4: set a token cap> ##################################
+        _, final_results = input_clipping("", final_results, max_token_limit=3200)
+        yield from update_ui(chatbot=chatbot, history=final_results)  # note that the history is replaced here
+
+    res = write_results_to_file(file_write_buffer, file_name=gen_time_str())
+    promote_file_to_downloadzone(res.split('\t')[-1], chatbot=chatbot)
+    yield from update_ui(chatbot=chatbot, history=final_results)  # refresh the UI
 
 
 @CatchException
@@ -151,10 +137,7 @@ def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
         return
 
     # search for the list of files to process
-    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
-    # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
-    # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
-    # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
+    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
 
     # if no files were found
     if len(file_manifest) == 0:
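For reference, the `get_token_num` helper above obtains its tokenizer through `model_info`; resolving it directly through tiktoken is an equivalent shortcut (assumed here purely for illustration):
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))

TOKEN_LIMIT_PER_FRAGMENT = 2500
# fragments produced by the breakdown step each satisfy get_token_num(fragment) <= 2500
assert get_token_num("Attention is all you need.") < TOKEN_LIMIT_PER_FRAGMENT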
crazy_functions/批量翻译PDF文档_多线程.py
CHANGED
@@ -1,5 +1,5 @@
|
|
1 |
from toolbox import CatchException, report_execption, write_results_to_file
|
2 |
-
from toolbox import update_ui
|
3 |
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
4 |
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
5 |
from .crazy_utils import read_and_clean_pdf_text
|
@@ -147,23 +147,14 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
|
|
147 |
print('writing html result failed:', trimmed_format_exc())
|
148 |
|
149 |
# 准备文件的下载
|
150 |
-
import shutil
|
151 |
for pdf_path in generated_conclusion_files:
|
152 |
# 重命名文件
|
153 |
-
rename_file = f'
|
154 |
-
|
155 |
-
os.remove(rename_file)
|
156 |
-
shutil.copyfile(pdf_path, rename_file)
|
157 |
-
if os.path.exists(pdf_path):
|
158 |
-
os.remove(pdf_path)
|
159 |
for html_path in generated_html_files:
|
160 |
# 重命名文件
|
161 |
-
rename_file = f'
|
162 |
-
|
163 |
-
os.remove(rename_file)
|
164 |
-
shutil.copyfile(html_path, rename_file)
|
165 |
-
if os.path.exists(html_path):
|
166 |
-
os.remove(html_path)
|
167 |
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
|
168 |
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
169 |
|
|
|
1 |
from toolbox import CatchException, report_execption, write_results_to_file
|
2 |
+
from toolbox import update_ui, promote_file_to_downloadzone
|
3 |
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
4 |
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
5 |
from .crazy_utils import read_and_clean_pdf_text
|
|
|
147 |
print('writing html result failed:', trimmed_format_exc())
|
148 |
|
149 |
# 准备文件的下载
|
|
|
150 |
for pdf_path in generated_conclusion_files:
|
151 |
# 重命名文件
|
152 |
+
rename_file = f'翻译-{os.path.basename(pdf_path)}'
|
153 |
+
promote_file_to_downloadzone(pdf_path, rename_file=rename_file, chatbot=chatbot)
|
|
|
|
|
|
|
|
|
154 |
for html_path in generated_html_files:
|
155 |
# 重命名文件
|
156 |
+
rename_file = f'翻译-{os.path.basename(html_path)}'
|
157 |
+
promote_file_to_downloadzone(html_path, rename_file=rename_file, chatbot=chatbot)
|
|
|
|
|
|
|
|
|
158 |
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
|
159 |
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
160 |
|
crazy_functions/虚空终端.py
CHANGED
@@ -1,87 +1,70 @@
|
|
1 |
from toolbox import CatchException, update_ui, gen_time_str
|
2 |
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
3 |
from .crazy_utils import input_clipping
|
|
|
4 |
|
5 |
-
|
6 |
-
prompt = """
|
7 |
-
I have to achieve some functionalities by calling one of the functions below.
|
8 |
-
Your job is to find the correct funtion to use to satisfy my requirement,
|
9 |
-
and then write python code to call this function with correct parameters.
|
10 |
-
|
11 |
-
These are functions you are allowed to choose from:
|
12 |
-
1.
|
13 |
-
功能描述: 总结音视频内容
|
14 |
-
调用函数: ConcludeAudioContent(txt, llm_kwargs)
|
15 |
-
参数说明:
|
16 |
-
txt: 音频文件的路径
|
17 |
-
llm_kwargs: 模型参数, 永远给定None
|
18 |
-
2.
|
19 |
-
功能描述: 将每次对话记录写入Markdown格式的文件中
|
20 |
-
调用函数: WriteMarkdown()
|
21 |
-
3.
|
22 |
-
功能描述: 将指定目录下的PDF文件从英文翻译成中文
|
23 |
-
调用函数: BatchTranslatePDFDocuments_MultiThreaded(txt, llm_kwargs)
|
24 |
-
参数说明:
|
25 |
-
txt: PDF文件所在的路径
|
26 |
-
llm_kwargs: 模型参数, 永远给定None
|
27 |
-
4.
|
28 |
-
功能描述: 根据文本使用GPT模型生成相应的图像
|
29 |
-
调用函数: ImageGeneration(txt, llm_kwargs)
|
30 |
-
参数说明:
|
31 |
-
txt: 图像生成所用到的提示文本
|
32 |
-
llm_kwargs: 模型参数, 永远给定None
|
33 |
-
5.
|
34 |
-
功能描述: 对输入的word文档进行摘要生成
|
35 |
-
调用函数: SummarizingWordDocuments(input_path, output_path)
|
36 |
-
参数说明:
|
37 |
-
input_path: 待处理的word文档路径
|
38 |
-
output_path: 摘要生成后的文档路径
|
39 |
-
|
40 |
-
|
41 |
-
You should always anwser with following format:
|
42 |
-
----------------
|
43 |
-
Code:
|
44 |
-
```
|
45 |
-
class AutoAcademic(object):
|
46 |
-
def __init__(self):
|
47 |
-
self.selected_function = "FILL_CORRECT_FUNCTION_HERE" # e.g., "GenerateImage"
|
48 |
-
self.txt = "FILL_MAIN_PARAMETER_HERE" # e.g., "荷叶上的蜻蜓"
|
49 |
-
self.llm_kwargs = None
|
50 |
-
```
|
51 |
-
Explanation:
|
52 |
-
只有GenerateImage和生成图像相关, 因此选择GenerateImage函数。
|
53 |
-
----------------
|
54 |
-
|
55 |
-
Now, this is my requirement:
|
56 |
-
|
57 |
-
"""
|
58 |
def get_fn_lib():
|
59 |
return {
|
60 |
-
"BatchTranslatePDFDocuments_MultiThreaded":
|
61 |
-
|
62 |
-
|
63 |
-
|
64 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
65 |
}
|
66 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
67 |
def inspect_dependency(chatbot, history):
|
68 |
return True
|
69 |
|
70 |
def eval_code(code, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
71 |
-
import
|
72 |
-
|
73 |
-
with open('gpt_log/void_terminal_runtime.py', 'w', encoding='utf8') as f:
|
74 |
-
f.write(code)
|
75 |
-
|
76 |
try:
|
77 |
-
|
78 |
-
|
79 |
-
auto_dict = AutoAcademic()
|
80 |
-
selected_function = auto_dict.selected_function
|
81 |
-
txt = auto_dict.txt
|
82 |
-
fp, fn = get_fn_lib()[selected_function]
|
83 |
fn_plugin = getattr(importlib.import_module(fp, fn), fn)
|
84 |
-
|
|
|
85 |
except:
|
86 |
from toolbox import trimmed_format_exc
|
87 |
chatbot.append(["执行错误", f"\n```\n{trimmed_format_exc()}\n```\n"])
|
@@ -110,22 +93,27 @@ def 终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_
|
|
110 |
history = []
|
111 |
|
112 |
# 基本信息:功能、贡献者
|
113 |
-
chatbot.append(["
|
114 |
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
115 |
-
|
116 |
-
# # 尝试导入依赖, 如果缺少依赖, 则给出安装建议
|
117 |
-
# dep_ok = yield from inspect_dependency(chatbot=chatbot, history=history) # 刷新界面
|
118 |
-
# if not dep_ok: return
|
119 |
|
120 |
# 输入
|
121 |
-
i_say =
|
122 |
# 开始
|
|
|
|
|
123 |
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
124 |
inputs=i_say, inputs_show_user=txt,
|
125 |
-
llm_kwargs=
|
126 |
-
sys_prompt=
|
127 |
)
|
128 |
|
129 |
# 将代码转为动画
|
130 |
-
|
131 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
from toolbox import CatchException, update_ui, gen_time_str
|
2 |
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
3 |
from .crazy_utils import input_clipping
|
4 |
+
import copy, json
|
5 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
6 |
def get_fn_lib():
|
7 |
return {
|
8 |
+
"BatchTranslatePDFDocuments_MultiThreaded": {
|
9 |
+
"module": "crazy_functions.批量翻译PDF文档_多线程",
|
10 |
+
"function": "批量翻译PDF文档",
|
11 |
+
"description": "Translate PDF Documents",
|
12 |
+
"arg_1_description": "A path containing pdf files.",
|
13 |
+
},
|
14 |
+
"SummarizingWordDocuments": {
|
15 |
+
"module": "crazy_functions.总结word文档",
|
16 |
+
"function": "总结word文档",
|
17 |
+
"description": "Summarize Word Documents",
|
18 |
+
"arg_1_description": "A path containing Word files.",
|
19 |
+
},
|
20 |
+
"ImageGeneration": {
|
21 |
+
"module": "crazy_functions.图片生成",
|
22 |
+
"function": "图片生成",
|
23 |
+
"description": "Generate a image that satisfies some description.",
|
24 |
+
"arg_1_description": "Descriptions about the image to be generated.",
|
25 |
+
},
|
26 |
+
"TranslateMarkdownFromEnglishToChinese": {
|
27 |
+
"module": "crazy_functions.批量Markdown翻译",
|
28 |
+
"function": "Markdown中译英",
|
29 |
+
"description": "Translate Markdown Documents from English to Chinese.",
|
30 |
+
"arg_1_description": "A path containing Markdown files.",
|
31 |
+
},
|
32 |
+
"SummaryAudioVideo": {
|
33 |
+
"module": "crazy_functions.总结音视频",
|
34 |
+
"function": "总结音视频",
|
35 |
+
"description": "Get text from a piece of audio and summarize this audio.",
|
36 |
+
"arg_1_description": "A path containing audio files.",
|
37 |
+
},
|
38 |
}
|
39 |
|
40 |
+
functions = [
|
41 |
+
{
|
42 |
+
"name": k,
|
43 |
+
"description": v['description'],
|
44 |
+
"parameters": {
|
45 |
+
"type": "object",
|
46 |
+
"properties": {
|
47 |
+
"plugin_arg_1": {
|
48 |
+
"type": "string",
|
49 |
+
"description": v['arg_1_description'],
|
50 |
+
},
|
51 |
+
},
|
52 |
+
"required": ["plugin_arg_1"],
|
53 |
+
},
|
54 |
+
} for k, v in get_fn_lib().items()
|
55 |
+
]
|
56 |
+
|
57 |
def inspect_dependency(chatbot, history):
|
58 |
return True
|
59 |
|
60 |
def eval_code(code, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
61 |
+
import importlib
|
|
|
|
|
|
|
|
|
62 |
try:
|
63 |
+
tmp = get_fn_lib()[code['name']]
|
64 |
+
fp, fn = tmp['module'], tmp['function']
|
|
|
|
|
|
|
|
|
65 |
fn_plugin = getattr(importlib.import_module(fp, fn), fn)
|
66 |
+
arg = json.loads(code['arguments'])['plugin_arg_1']
|
67 |
+
yield from fn_plugin(arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
|
68 |
except:
|
69 |
from toolbox import trimmed_format_exc
|
70 |
chatbot.append(["执行错误", f"\n```\n{trimmed_format_exc()}\n```\n"])
|
|
|
93 |
history = []
|
94 |
|
95 |
# 基本信息:功能、贡献者
|
96 |
+
chatbot.append(["虚空终端插件的功能?", "根据自然语言的描述, 执行任意插件的命令."])
|
97 |
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
|
|
|
|
|
|
|
|
98 |
|
99 |
# 输入
|
100 |
+
i_say = txt
|
101 |
# 开始
|
102 |
+
llm_kwargs_function_call = copy.deepcopy(llm_kwargs)
|
103 |
+
llm_kwargs_function_call['llm_model'] = 'gpt-call-fn' # 修改调用函数
|
104 |
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
105 |
inputs=i_say, inputs_show_user=txt,
|
106 |
+
llm_kwargs=llm_kwargs_function_call, chatbot=chatbot, history=[],
|
107 |
+
sys_prompt=functions
|
108 |
)
|
109 |
|
110 |
# 将代码转为动画
|
111 |
+
res = json.loads(gpt_say)['choices'][0]
|
112 |
+
if res['finish_reason'] == 'function_call':
|
113 |
+
code = json.loads(gpt_say)['choices'][0]
|
114 |
+
yield from eval_code(code['message']['function_call'], llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
|
115 |
+
else:
|
116 |
+
chatbot.append(["无法调用相关功能", res])
|
117 |
+
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
118 |
+
|
119 |
+
|
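The tail of 终端 above parses an OpenAI function-calling response. A representative payload, hand-written here to match the fields the code actually reads, clarifies the shape:
import json

gpt_say = json.dumps({
    "choices": [{
        "finish_reason": "function_call",          # set by the API when the model picks a function
        "message": {
            "function_call": {
                "name": "ImageGeneration",          # must be a key of get_fn_lib()
                "arguments": "{\"plugin_arg_1\": \"a dragonfly on a lotus leaf\"}",
            }
        }
    }]
})
res = json.loads(gpt_say)['choices'][0]
assert res['finish_reason'] == 'function_call'
code = res['message']['function_call']
arg = json.loads(code['arguments'])['plugin_arg_1']   # same extraction as eval_code above
assert code['name'] in ('BatchTranslatePDFDocuments_MultiThreaded', 'SummarizingWordDocuments',
                        'ImageGeneration', 'TranslateMarkdownFromEnglishToChinese', 'SummaryAudioVideo')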
crazy_functions/询问多个大语言模型.py
CHANGED
@@ -6,7 +6,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
|
|
6 |
"""
|
7 |
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
8 |
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
9 |
-
plugin_kwargs
|
10 |
chatbot 聊天显示框的句柄,用于显示给用户
|
11 |
history 聊天历史,前情提要
|
12 |
system_prompt 给gpt的静默提醒
|
@@ -35,19 +35,21 @@ def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history,
|
|
35 |
"""
|
36 |
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
37 |
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
38 |
-
plugin_kwargs
|
39 |
chatbot 聊天显示框的句柄,用于显示给用户
|
40 |
history 聊天历史,前情提要
|
41 |
system_prompt 给gpt的静默提醒
|
42 |
web_port 当前软件运行的端口号
|
43 |
"""
|
44 |
history = [] # 清空历史,以免输入溢出
|
45 |
-
chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……"))
|
46 |
-
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
47 |
|
48 |
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
49 |
# llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
|
50 |
llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
|
|
|
|
|
|
|
|
|
51 |
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
52 |
inputs=txt, inputs_show_user=txt,
|
53 |
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
|
|
|
6 |
"""
|
7 |
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
8 |
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
9 |
+
plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
|
10 |
chatbot 聊天显示框的句柄,用于显示给用户
|
11 |
history 聊天历史,前情提要
|
12 |
system_prompt 给gpt的静默提醒
|
|
|
35 |
"""
|
36 |
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
37 |
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
38 |
+
plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
|
39 |
chatbot 聊天显示框的句柄,用于显示给用户
|
40 |
history 聊天历史,前情提要
|
41 |
system_prompt 给gpt的静默提醒
|
42 |
web_port 当前软件运行的端口号
|
43 |
"""
|
44 |
history = [] # 清空历史,以免输入溢出
|
|
|
|
|
45 |
|
46 |
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
47 |
# llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
|
48 |
llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
|
49 |
+
|
50 |
+
chatbot.append((txt, f"正在同时咨询{llm_kwargs['llm_model']}"))
|
51 |
+
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
52 |
+
|
53 |
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
54 |
inputs=txt, inputs_show_user=txt,
|
55 |
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
|
crazy_functions/语音助手.py
ADDED
@@ -0,0 +1,195 @@
|
+from toolbox import update_ui
+from toolbox import CatchException, get_conf, markdown_convertion
+from crazy_functions.crazy_utils import input_clipping
+from request_llm.bridge_all import predict_no_ui_long_connection
+import threading, time
+import numpy as np
+from .live_audio.aliyunASR import AliyunASR
+import json
+
+class WatchDog():
+    def __init__(self, timeout, bark_fn, interval=3, msg="") -> None:
+        self.last_feed = None
+        self.timeout = timeout
+        self.bark_fn = bark_fn
+        self.interval = interval
+        self.msg = msg
+        self.kill_dog = False
+
+    def watch(self):
+        while True:
+            if self.kill_dog: break
+            if time.time() - self.last_feed > self.timeout:
+                if len(self.msg) > 0: print(self.msg)
+                self.bark_fn()
+                break
+            time.sleep(self.interval)
+
+    def begin_watch(self):
+        self.last_feed = time.time()
+        th = threading.Thread(target=self.watch)
+        th.daemon = True
+        th.start()
+
+    def feed(self):
+        self.last_feed = time.time()
+
+def chatbot2history(chatbot):
+    history = []
+    for c in chatbot:
+        for q in c:
+            if q not in ["[请讲话]", "[等待GPT响应]", "[正在等您说完问题]"]:
+                history.append(q.strip('<div class="markdown-body">').strip('</div>').strip('<p>').strip('</p>'))
+    return history
+
+class AsyncGptTask():
+    def __init__(self) -> None:
+        self.observe_future = []
+        self.observe_future_chatbot_index = []
+
+    def gpt_thread_worker(self, i_say, llm_kwargs, history, sys_prompt, observe_window, index):
+        try:
+            MAX_TOKEN_ALLO = 2560
+            i_say, history = input_clipping(i_say, history, max_token_limit=MAX_TOKEN_ALLO)
+            gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=history, sys_prompt=sys_prompt,
+                                                            observe_window=observe_window[index], console_slience=True)
+        except ConnectionAbortedError as token_exceed_err:
+            print('至少一个线程任务Token溢出而失败', token_exceed_err)
+        except Exception as e:
+            print('至少一个线程任务意外失败', e)
+
+    def add_async_gpt_task(self, i_say, chatbot_index, llm_kwargs, history, system_prompt):
+        self.observe_future.append([""])
+        self.observe_future_chatbot_index.append(chatbot_index)
+        cur_index = len(self.observe_future)-1
+        th_new = threading.Thread(target=self.gpt_thread_worker, args=(i_say, llm_kwargs, history, system_prompt, self.observe_future, cur_index))
+        th_new.daemon = True
+        th_new.start()
+
+    def update_chatbot(self, chatbot):
+        for of, ofci in zip(self.observe_future, self.observe_future_chatbot_index):
+            try:
+                chatbot[ofci] = list(chatbot[ofci])
+                chatbot[ofci][1] = markdown_convertion(of[0])
+            except:
+                self.observe_future = []
+                self.observe_future_chatbot_index = []
+        return chatbot
+
+class InterviewAssistant(AliyunASR):
+    def __init__(self):
+        self.capture_interval = 0.5  # second
+        self.stop = False
+        self.parsed_text = ""
+        self.parsed_sentence = ""
+        self.buffered_sentence = ""
+        self.event_on_result_chg = threading.Event()
+        self.event_on_entence_end = threading.Event()
+        self.event_on_commit_question = threading.Event()
+
+    def __del__(self):
+        self.stop = True
+        self.stop_msg = ""
+        self.commit_wd.kill_dog = True
+        self.plugin_wd.kill_dog = True
+
+    def init(self, chatbot):
+        # initialize the audio-capture thread
+        self.captured_audio = np.array([])
+        self.keep_latest_n_second = 10
+        self.commit_after_pause_n_second = 2.0
+        self.ready_audio_flagment = None
+        self.stop = False
+        self.plugin_wd = WatchDog(timeout=5, bark_fn=self.__del__, msg="程序终止")
+        self.aut = threading.Thread(target=self.audio_convertion_thread, args=(chatbot._cookies['uuid'],))
+        self.aut.daemon = True
+        self.aut.start()
+        # th2 = threading.Thread(target=self.audio2txt_thread, args=(chatbot._cookies['uuid'],))
+        # th2.daemon = True
+        # th2.start()
+
+    def no_audio_for_a_while(self):
+        if len(self.buffered_sentence) < 7:  # if the sentence is shorter than 7 characters, don't commit it yet
+            self.commit_wd.begin_watch()
+        else:
+            self.event_on_commit_question.set()
+
+    def begin(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
+        # main plugin function
+        self.init(chatbot)
+        chatbot.append(["[请讲话]", "[正在等您说完问题]"])
+        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+        self.plugin_wd.begin_watch()
+        self.agt = AsyncGptTask()
+        self.commit_wd = WatchDog(timeout=self.commit_after_pause_n_second, bark_fn=self.no_audio_for_a_while, interval=0.2)
+        self.commit_wd.begin_watch()
+
+        while not self.stop:
+            self.event_on_result_chg.wait(timeout=0.25)  # run once every 0.25 second
+            chatbot = self.agt.update_chatbot(chatbot)   # write the sub-threads' gpt results into chatbot
+            history = chatbot2history(chatbot)
+            yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+            self.plugin_wd.feed()
+
+            if self.event_on_result_chg.is_set():
+                # update audio decode result
+                self.event_on_result_chg.clear()
+                chatbot[-1] = list(chatbot[-1])
+                chatbot[-1][0] = self.buffered_sentence + self.parsed_text
+                history = chatbot2history(chatbot)
+                yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+                self.commit_wd.feed()
+
+            if self.event_on_entence_end.is_set():
+                # called when a sentence has ended
+                self.event_on_entence_end.clear()
+                self.parsed_text = self.parsed_sentence
+                self.buffered_sentence += self.parsed_sentence
+
+            if self.event_on_commit_question.is_set():
+                # called when a question should be commited
+                self.event_on_commit_question.clear()
+                if len(self.buffered_sentence) == 0: raise RuntimeError
+
+                self.commit_wd.begin_watch()
+                chatbot[-1] = list(chatbot[-1])
+                chatbot[-1] = [self.buffered_sentence, "[等待GPT响应]"]
+                yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+                # add gpt task: request gpt in a sub-thread to avoid blocking
+                history = chatbot2history(chatbot)
+                self.agt.add_async_gpt_task(self.buffered_sentence, len(chatbot)-1, llm_kwargs, history, system_prompt)
+
+                self.buffered_sentence = ""
+                chatbot.append(["[请讲话]", "[正在等您说完问题]"])
+                yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+
+        if len(self.stop_msg) != 0:
+            raise RuntimeError(self.stop_msg)
+
+
+
+@CatchException
+def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    # pip install -U openai-whisper
+    chatbot.append(["对话助手函数插件:使用时,双手离开鼠标键盘吧", "音频助手, 正在听您讲话(点击“停止”键可终止程序)..."])
+    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+
+    # try to import dependencies; if missing, suggest how to install them
+    try:
+        import nls
+        from scipy import io
+    except:
+        chatbot.append(["导入依赖失败", "使用该模块需要额外依赖, 安装方法:```pip install --upgrade aliyun-python-sdk-core==2.13.3 pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git```"])
+        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+        return
+
+    APPKEY = get_conf('ALIYUN_APPKEY')
+    if APPKEY == "":
+        chatbot.append(["导入依赖失败", "没有阿里云语音识别APPKEY和TOKEN, 详情见https://help.aliyun.com/document_detail/450255.html"])
+        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+        return
+
+    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
+    ia = InterviewAssistant()
+    yield from ia.begin(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
+
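The two WatchDog instances drive the whole flow: `commit_wd` (timeout 2.0 s) detects a pause in speech and commits the question, while `plugin_wd` (timeout 5 s) detects that the UI generator has stopped. A self-contained toy run of the class defined above:
import time

fired = []
wd = WatchDog(timeout=0.3, bark_fn=lambda: fired.append(True), interval=0.05)
wd.begin_watch()
for _ in range(5):
    time.sleep(0.1)
    wd.feed()            # keep feeding: the dog stays quiet
assert not fired
time.sleep(0.6)          # stop feeding: the dog barks once, then its thread exits
assert fired == [True]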
crazy_functions/谷歌检索小助手.py
CHANGED
@@ -104,7 +104,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
 
         meta_paper_info_list = meta_paper_info_list[batchsize:]
 
     chatbot.append(["状态?",
-                    "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write
+                    "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
     msg = '正常'
     yield from update_ui(chatbot=chatbot, history=history, msg=msg)  # refresh the UI
     res = write_results_to_file(history)
crazy_functions/辅助回答.py
ADDED
@@ -0,0 +1,28 @@
+# encoding: utf-8
+# @Time   : 2023/4/19
+# @Author : Spike
+# @Descr  :
+from toolbox import update_ui
+from toolbox import CatchException, report_execption, write_results_to_file
+from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+
+
+@CatchException
+def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    if txt:
+        show_say = txt
+        prompt = txt+'\n回答完问题后,再列出用户可能提出的三个问题。'
+    else:
+        prompt = history[-1]+"\n分析上述回答,再列出用户可能提出的三个问题。"
+        show_say = '分析上述回答,再列出用户可能提出的三个问题。'
+    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+        inputs=prompt,
+        inputs_show_user=show_say,
+        llm_kwargs=llm_kwargs,
+        chatbot=chatbot,
+        history=history,
+        sys_prompt=system_prompt
+    )
+    chatbot[-1] = (show_say, gpt_say)
+    history.extend([show_say, gpt_say])
+    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
crazy_functions/高级功能函数模板.py
CHANGED
@@ -1,13 +1,12 @@
 from toolbox import CatchException, update_ui
 from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-import datetime
-
+import datetime
 @CatchException
 def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
-    plugin_kwargs
+    plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
@@ -19,34 +18,12 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     for i in range(5):
         currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month
         currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day
-        i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}
+        i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。'
         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
             inputs=i_say, inputs_show_user=i_say,
             llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-            sys_prompt=
+            sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? < PUT_YOUR_QUERY_HERE >)。"
         )
-        gpt_say = get_images(gpt_say)
         chatbot[-1] = (i_say, gpt_say)
         history.append(i_say);history.append(gpt_say)
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
-
-
-def get_images(gpt_say):
-    def get_image_by_keyword(keyword):
-        import requests
-        from bs4 import BeautifulSoup
-        response = requests.get(f'https://wallhaven.cc/search?q={keyword}', timeout=2)
-        for image_element in BeautifulSoup(response.content, 'html.parser').findAll("img"):
-            if "data-src" in image_element: break
-        return image_element["data-src"]
-
-    for keywords in re.findall('{"KeyWords":\[(.*?)\]}', gpt_say):
-        keywords = [n.strip('"') for n in keywords.split(',')]
-        try:
-            description = keywords[0]
-            url = get_image_by_keyword(keywords[0])
-            img_tag = f"\n\n![{description}]({url})"
-            gpt_say += img_tag
-        except:
-            continue
-    return gpt_say
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
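The updated template replaces the removed `get_images` scraper with a prompt-level convention: the new `sys_prompt` tells the model to embed images as Markdown pointing at the Unsplash source URL, with `PUT_YOUR_QUERY_HERE` swapped for one descriptive word. A minimal illustration of the Markdown the model is expected to emit (the query word here is a made-up example):

```python
# Illustration of the Unsplash Markdown convention requested by the new sys_prompt.
# "armistice" stands in for the single descriptive word the model is told to pick.
query = "armistice"
markdown_image = f"![{query}](https://source.unsplash.com/1280x720/?{query})"
print(markdown_image)  # -> ![armistice](https://source.unsplash.com/1280x720/?armistice)
```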
docker-compose.yml
CHANGED
@@ -6,7 +6,7 @@
 version: '3'
 services:
   gpt_academic_nolocalllms:
-    image: ghcr.io/binary-husky/gpt_academic_nolocal:master
+    image: ghcr.io/binary-husky/gpt_academic_nolocal:master # (Auto Built by Dockerfile: docs/GithubAction+NoLocal)
     environment:
       # 请查阅 `config.py` 以查看所有的配置信息
       API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
@@ -33,7 +33,7 @@ services:
 version: '3'
 services:
   gpt_academic_with_chatglm:
-    image: ghcr.io/binary-husky/gpt_academic_chatglm_moss:master
+    image: ghcr.io/binary-husky/gpt_academic_chatglm_moss:master # (Auto Built by Dockerfile: docs/Dockerfile+ChatGLM)
     environment:
       # 请查阅 `config.py` 以查看所有的配置信息
       API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
@@ -63,7 +63,7 @@ services:
 version: '3'
 services:
   gpt_academic_with_rwkv:
-    image:
+    image: ghcr.io/binary-husky/gpt_academic_jittorllms:master
     environment:
       # 请查阅 `config.py` 以查看所有的配置信息
       API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
@@ -85,33 +85,18 @@ services:
     # 与宿主的网络融合
     network_mode: "host"

-    # 使用代理网络拉取最新代码
-    # command: >
-    #   bash -c " truncate -s -1 /etc/proxychains.conf &&
-    #     echo \"socks5 127.0.0.1 10880\" >> /etc/proxychains.conf &&
-    #     echo '[gpt-academic] 正在从github拉取最新代码...' &&
-    #     proxychains git pull &&
-    #     echo '[jittorllms] 正在从github拉取最新代码...' &&
-    #     proxychains git --git-dir=request_llm/jittorllms/.git --work-tree=request_llm/jittorllms pull --force &&
-    #     python3 -u main.py"
-
     # 不使用代理网络拉取最新代码
     command: >
-
-      git pull &&
-      pip install -r requirements.txt &&
-      echo '[jittorllms] 正在从github拉取最新代码...' &&
-      git --git-dir=request_llm/jittorllms/.git --work-tree=request_llm/jittorllms pull --force &&
-      python3 -u main.py"
+      python3 -u main.py


 ## ===================================================
-## 【方案四】
+## 【方案四】 ChatGPT + Latex
 ## ===================================================
 version: '3'
 services:
   gpt_academic_with_latex:
-    image: ghcr.io/binary-husky/gpt_academic_with_latex:master
+    image: ghcr.io/binary-husky/gpt_academic_with_latex:master # (Auto Built by Dockerfile: docs/GithubAction+NoLocal+Latex)
     environment:
       # 请查阅 `config.py` 以查看所有的配置信息
       API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
@@ -130,3 +115,36 @@ services:
     command: >
       bash -c "python3 -u main.py"

+
+## ===================================================
+## 【方案五】 ChatGPT + 语音助手 (请先阅读 docs/use_audio.md)
+## ===================================================
+version: '3'
+services:
+  gpt_academic_with_audio:
+    image: ghcr.io/binary-husky/gpt_academic_audio_assistant:master
+    environment:
+      # 请查阅 `config.py` 以查看所有的配置信息
+      API_KEY: ' fk195831-IdP0Pb3W6DCMUIbQwVX6MsSiyxwqybyS '
+      USE_PROXY: ' False '
+      proxies: ' None '
+      LLM_MODEL: ' gpt-3.5-turbo '
+      AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "gpt-4"] '
+      ENABLE_AUDIO: ' True '
+      LOCAL_MODEL_DEVICE: ' cuda '
+      DEFAULT_WORKER_NUM: ' 20 '
+      WEB_PORT: ' 12343 '
+      ADD_WAIFU: ' True '
+      THEME: ' Chuanhu-Small-and-Beautiful '
+      ALIYUN_APPKEY: ' RoP1ZrM84DnAFkZK '
+      ALIYUN_TOKEN: ' f37f30e0f9934c34a992f6f64f7eba4f '
+      # (无需填写) ALIYUN_ACCESSKEY: ' LTAI5q6BrFUzoRXVGUWnekh1 '
+      # (无需填写) ALIYUN_SECRET: ' eHmI20AVWIaQZ0CiTD2bGQVsaP9i68 '
+
+    # 与宿主的网络融合
+    network_mode: "host"
+
+    # 不使用代理网络拉取最新代码
+    command: >
+      bash -c "python3 -u main.py"
+
docs/Dockerfile+ChatGLM
CHANGED
@@ -26,8 +26,8 @@ RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
 RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
 # 下载分支
 WORKDIR /gpt
-RUN $useProxyNetwork git clone https://github.com/binary-husky/
-WORKDIR /gpt/
+RUN $useProxyNetwork git clone https://github.com/binary-husky/gpt_academic.git
+WORKDIR /gpt/gpt_academic
 RUN $useProxyNetwork python3 -m pip install -r requirements.txt
 RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
 RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
docs/Dockerfile+JittorLLM
CHANGED
@@ -26,8 +26,8 @@ RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
 RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
 # 下载分支
 WORKDIR /gpt
-RUN $useProxyNetwork git clone https://github.com/binary-husky/
-WORKDIR /gpt/
+RUN $useProxyNetwork git clone https://github.com/binary-husky/gpt_academic.git
+WORKDIR /gpt/gpt_academic
 RUN $useProxyNetwork python3 -m pip install -r requirements.txt
 RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
 RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
docs/GithubAction+ChatGLM+Moss
CHANGED
@@ -13,8 +13,8 @@ RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
 RUN python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
 # 下载分支
 WORKDIR /gpt
-RUN git clone https://github.com/binary-husky/
-WORKDIR /gpt/
+RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
+WORKDIR /gpt/gpt_academic
 RUN git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss
 RUN python3 -m pip install -r requirements.txt
 RUN python3 -m pip install -r request_llm/requirements_moss.txt
docs/GithubAction+JittorLLMs
CHANGED
@@ -13,8 +13,8 @@ RUN python3 -m pip install torch --extra-index-url https://download.pytorch.org/

 # 下载分支
 WORKDIR /gpt
-RUN git clone https://github.com/binary-husky/
-WORKDIR /gpt/
+RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
+WORKDIR /gpt/gpt_academic
 RUN python3 -m pip install -r requirements.txt
 RUN python3 -m pip install -r request_llm/requirements_chatglm.txt
 RUN python3 -m pip install -r request_llm/requirements_newbing.txt
docs/GithubAction+NoLocal+AudioAssistant
ADDED
@@ -0,0 +1,22 @@
+# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
+# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic-nolocal -f docs/Dockerfile+NoLocal .
+# 如何运行: docker run --rm -it --net=host gpt-academic-nolocal
+FROM python:3.11
+
+# 指定路径
+WORKDIR /gpt
+
+# 装载项目文件
+COPY . .
+
+# 安装依赖
+RUN pip3 install -r requirements.txt
+
+# 安装语音插件的额外依赖
+RUN pip3 install pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
+
+# 可选步骤,用于预热模块
+RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+
+# 启动
+CMD ["python3", "-u", "main.py"]
docs/README.md.German.md
CHANGED
@@ -15,7 +15,7 @@ Um dieses Projekt in eine beliebige Sprache mit GPT zu übersetzen, lesen Sie `m
 >
 > 1. Beachten Sie bitte, dass nur Funktionserweiterungen (Schaltflächen) mit **roter Farbe** Dateien lesen können und einige Erweiterungen im **Dropdown-Menü** des Erweiterungsbereichs zu finden sind. Außerdem begrüßen wir jede neue Funktionserweiterung mit **höchster Priorität** und bearbeiten sie.
 >
-> 2. Die Funktionalität jeder Datei in diesem Projekt wird in der Selbstanalyse [`self_analysis.md`](https://github.com/binary-husky/
+> 2. Die Funktionalität jeder Datei in diesem Projekt wird in der Selbstanalyse [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) detailliert beschrieben. Mit der Weiterentwicklung der Versionen können Sie jederzeit die zugehörigen Funktions-Erweiterungen aufrufen, um durch Aufruf von GPT einen Selbstanalysebericht des Projekts zu erstellen. Häufig gestellte Fragen finden Sie in der [`Wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installationsanweisungen](#Installation).
 >
 > 3. Dieses Projekt ist kompatibel und fördert die Verwendung von inländischen Sprachmodellen wie ChatGLM und RWKV, Pangu, etc. Es unterstützt das Vorhandensein mehrerer api-keys, die in der Konfigurationsdatei wie folgt angegeben werden können: `API_KEY="openai-key1,openai-key2,api2d-key3"`. Wenn ein `API_KEY` temporär geändert werden muss, geben Sie den temporären `API_KEY` im Eingabebereich ein und drücken Sie dann die Eingabetaste, um ihn zu übernehmen.Funktion | Beschreibung
 --- | ---
@@ -23,13 +23,13 @@ Ein-Klick-Polieren | Unterstützt ein-Klick-Polieren und ein-Klick-Suche nach gr
 Ein-Klick Chinesisch-Englisch Übersetzung | Ein-Klick Chinesisch-Englisch Übersetzung
 Ein-Klick-Code-Erklärung | Zeigt Code, erklärt Code, erzeugt Code und fügt Kommentare zum Code hinzu
 [Benutzerdefinierte Tastenkombinationen](https://www.bilibili.com/video/BV14s4y1E7jN) | Unterstützt benutzerdefinierte Tastenkombinationen
-Modulare Gestaltung | Unterstützt leistungsstarke individuelle [Funktions-Plugins](https://github.com/binary-husky/
-[Selbstprogramm-Analyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] [Ein-Klick Verstehen](https://github.com/binary-husky/
+Modulare Gestaltung | Unterstützt leistungsstarke individuelle [Funktions-Plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions). Plugins unterstützen [Hot-Updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[Selbstprogramm-Analyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] [Ein-Klick Verstehen](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) der Quellcode dieses Projekts
 [Programmanalyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] Ein-Klick-Analyse des Projektbaums anderer Python/C/C++/Java/Lua/...-Projekte
 Lesen von Papieren, [Übersetzen](https://www.bilibili.com/video/BV1KT411x7Wn) von Papieren | [Funktions-Plugin] Ein-Klick Erklärung des gesamten LaTeX/PDF-Artikels und Erstellung einer Zusammenfassung
 LaTeX-Volltext-Übersetzung und [Polieren](https://www.bilibili.com/video/BV1FT411H7c5/) | [Funktions-Plugin] Ein-Klick-Übersetzung oder-Polieren des LaTeX-Artikels
 Bulk-Kommentargenerierung | [Funktions-Plugin] Ein-Klick Massenerstellung von Funktionskommentaren
-Markdown [Chinesisch-Englisch Übersetzung](https://www.bilibili.com/video/BV1yo4y157jV/) | [Funktions-Plugin] Haben Sie die [README](https://github.com/binary-husky/
+Markdown [Chinesisch-Englisch Übersetzung](https://www.bilibili.com/video/BV1yo4y157jV/) | [Funktions-Plugin] Haben Sie die [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in den oben genannten 5 Sprachen gesehen?
 Analyse-Berichtserstellung von chat | [Funktions-Plugin] Automatische Zusammenfassung nach der Ausführung
 [Funktion zur vollständigen Übersetzung von PDF-Artikeln](https://www.bilibili.com/video/BV1KT411x7Wn) | [Funktions-Plugin] Extrahiert Titel und Zusammenfassung der PDF-Artikel und übersetzt den gesamten Text (mehrere Threads)
 [Arxiv-Assistent](https://www.bilibili.com/video/BV1LM4y1279X) | [Funktions-Plugin] Geben Sie die Arxiv-Artikel-URL ein und klicken Sie auf Eine-Klick-Übersetzung-Zusammenfassung + PDF-Download
@@ -37,7 +37,7 @@ Analyse-Berichtserstellung von chat | [Funktions-Plugin] Automatische Zusammenfa
 Internet-Informationen Aggregation + GPT | [Funktions-Plugin] Lassen Sie GPT eine Frage beantworten, indem es [zuerst Informationen aus dem Internet](https://www.bilibili.com/video/BV1om4y127ck/) sammelt und so die Informationen nie veralten
 Anzeige von Formeln / Bildern / Tabellen | Zeigt Formeln in beiden Formen, [TeX-Format und gerendeter Form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), unterstützt Formeln und Code-Highlights
 Unterstützung von PlugIns mit mehreren Threads | Unterstützt den Aufruf mehrerer Threads in Chatgpt, um Text oder Programme [Batch zu verarbeiten](https://www.bilibili.com/video/BV1FT411H7c5/)
-Starten Sie das dunkle Gradio-[Thema](https://github.com/binary-husky/
+Starten Sie das dunkle Gradio-[Thema](https://github.com/binary-husky/gpt_academic/issues/173) | Fügen Sie ```/?__theme=dark``` an das Ende der Browser-URL an, um das dunkle Thema zu aktivieren
 [Unterstützung für mehrere LLM-Modelle](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) Interface-Unterstützung | Das Gefühl, gleichzeitig von GPT3.5, GPT4, [Tshinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) bedient zu werden, muss toll sein, oder?
 Zugriff auf weitere LLM-Modelle, Unterstützung von [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Hinzufügen der Newbing-Schnittstelle (neues Bing), Einführung der Unterstützung von [Jittorllms](https://github.com/Jittor/JittorLLMs) der Tsinghua-Universität, [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) und [Pangu alpha](https://openi.org.cn/pangu/)
 Weitere neue Funktionen (wie Bildgenerierung) …… | Siehe Ende dieses Dokuments ……
@@ -76,8 +76,8 @@ Weitere neue Funktionen (wie Bildgenerierung) …… | Siehe Ende dieses Dokumen

 1. Download the project
 ```sh
-git clone https://github.com/binary-husky/
-cd
+git clone https://github.com/binary-husky/gpt_academic.git
+cd gpt_academic
 ```

 2. Configure API_KEY
@@ -133,8 +133,8 @@ python main.py
 1. Only ChatGPT (Recommended for most people)

 ``` sh
-git clone https://github.com/binary-husky/
-cd
+git clone https://github.com/binary-husky/gpt_academic.git # Download the project
+cd gpt_academic # Enter the path
 nano config.py # Edit config.py with any text editor, Configure "Proxy","API_KEY"and"WEB_PORT" (e.g 50923) etc.
 docker build -t gpt-academic . # Install

@@ -164,10 +164,10 @@ docker-compose up
 Configure API_URL_REDIRECT according to the instructions in `config.py`.

 2. Remote cloud server deployment (requires cloud server knowledge and experience)
-Please visit [Deployment wiki-1](https://github.com/binary-husky/
+Please visit [Deployment wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

 3. Using WSL 2 (Windows subsystem for Linux)
-Please visit [Deployment wiki-2](https://github.com/binary-husky/
+Please visit [Deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

 4. How to run at a secondary URL (such as `http://localhost/subpath`)
 Please visit [FastAPI operating instructions](docs/WithFastapi.md)
@@ -199,7 +199,7 @@ For example

 Write powerful function plugins to perform any task you want and can't think of.
 The difficulty of plugin writing and debugging is very low in this project. As long as you have a certain knowledge of Python, you can implement your own plugin functions by imitating the template we provided.
-For more information, please refer to the [Function Plugin Guide](https://github.com/binary-husky/
+For more information, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).

 ---
 # Latest Update
docs/README.md.Italian.md
CHANGED
@@ -13,7 +13,7 @@ Per tradurre questo progetto in qualsiasi lingua con GPT, leggere e eseguire [`m
 >
 > 1. Si prega di notare che solo i plugin (pulsanti) contrassegnati in **rosso** supportano la lettura di file, alcuni plugin sono posizionati nel **menu a discesa** nella zona dei plugin. Accettiamo e gestiamo PR per qualsiasi nuovo plugin con **massima priorità**!
 >
-> 2. Le funzionalità di ogni file di questo progetto sono descritte dettagliatamente nella propria analisi di autotraduzione [`self_analysis.md`](https://github.com/binary-husky/
+> 2. Le funzionalità di ogni file di questo progetto sono descritte dettagliatamente nella propria analisi di autotraduzione [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Con l'iterazione delle versioni, è possibile fare clic sui plugin funzionali correlati in qualsiasi momento per richiamare GPT e generare nuovamente il rapporto di analisi automatica del progetto. Le domande frequenti sono riassunte nella [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Metodo di installazione] (#installazione).
 >
 > 3. Questo progetto è compatibile e incoraggia l'utilizzo di grandi modelli di linguaggio di produzione nazionale come chatglm, RWKV, Pangu ecc. Supporta la coesistenza di più api-key e può essere compilato nel file di configurazione come `API_KEY="openai-key1,openai-key2,api2d-key3"`. Per sostituire temporaneamente `API_KEY`, inserire `API_KEY` temporaneo nell'area di input e premere Invio per renderlo effettivo.

@@ -25,13 +25,13 @@ Correzione immediata | Supporta correzione immediata e ricerca degli errori di g
 Traduzione cinese-inglese immediata | Traduzione cinese-inglese immediata con un solo clic
 Spiegazione del codice immediata | Visualizzazione del codice, spiegazione del codice, generazione del codice, annotazione del codice con un solo clic
 [Scorciatoie personalizzate](https://www.bilibili.com/video/BV14s4y1E7jN) | Supporta scorciatoie personalizzate
-Design modularizzato | Supporta potenti [plugin di funzioni](https://github.com/binary-husky/
-[Auto-profiling del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] [Comprensione immediata](https://github.com/binary-husky/
+Design modularizzato | Supporta potenti [plugin di funzioni](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions) personalizzati, i plugin supportano l'[aggiornamento in tempo reale](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[Auto-profiling del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] [Comprensione immediata](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) del codice sorgente di questo progetto
 [Analisi del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] Un clic può analizzare l'albero di altri progetti Python/C/C++/Java/Lua/...
 Lettura del documento, [traduzione](https://www.bilibili.com/video/BV1KT411x7Wn) del documento | [Plugin di funzioni] La lettura immediata dell'intero documento latex/pdf di un documento e la generazione di un riassunto
 Traduzione completa di un documento Latex, [correzione immediata](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin di funzioni] Una traduzione o correzione immediata di un documento Latex
 Generazione di annotazioni in batch | [Plugin di funzioni] Generazione automatica delle annotazioni di funzione con un solo clic
-[Traduzione cinese-inglese di Markdown](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin di funzioni] Hai letto il [README](https://github.com/binary-husky/
+[Traduzione cinese-inglese di Markdown](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin di funzioni] Hai letto il [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) delle cinque lingue sopra?
 Generazione di report di analisi di chat | [Plugin di funzioni] Generazione automatica di un rapporto di sintesi dopo l'esecuzione
 [Funzione di traduzione di tutto il documento PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin di funzioni] Estrarre il titolo e il sommario dell'articolo PDF + tradurre l'intero testo (multithreading)
 [Assistente di Arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin di funzioni] Inserire l'URL dell'articolo di Arxiv e tradurre il sommario con un clic + scaricare il PDF
@@ -39,7 +39,7 @@ Generazione di report di analisi di chat | [Plugin di funzioni] Generazione auto
 Aggregazione delle informazioni su Internet + GPT | [Plugin di funzioni] Fai in modo che GPT rilevi le informazioni su Internet prima di rispondere alle domande, senza mai diventare obsolete
 Visualizzazione di formule/img/tabelle | È possibile visualizzare un'equazione in forma [tex e render](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) contemporaneamente, supporta equazioni e evidenziazione del codice
 Supporto per plugin di funzioni multithreading | Supporto per chiamata multithreaded di chatgpt, elaborazione con un clic di grandi quantità di testo o di un programma
-Avvia il tema di gradio [scuro](https://github.com/binary-husky/
+Avvia il tema di gradio [scuro](https://github.com/binary-husky/gpt_academic/issues/173) | Aggiungere ```/?__theme=dark``` dopo l'URL del browser per passare a un tema scuro
 Supporto per maggiori modelli LLM, supporto API2D | Sentirsi serviti simultaneamente da GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) deve essere una grande sensazione, giusto?
 Ulteriori modelli LLM supportat,i supporto per l'implementazione di Huggingface | Aggiunta di un'interfaccia Newbing (Nuovo Bing), introdotta la compatibilità con Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) e [PanGu-α](https://openi.org.cn/pangu/)
 Ulteriori dimostrazioni di nuove funzionalità (generazione di immagini, ecc.)... | Vedere la fine di questo documento...
@@ -82,8 +82,8 @@ Ulteriori dimostrazioni di nuove funzionalità (generazione di immagini, ecc.)..

 1. Scarica il progetto
 ```sh
-git clone https://github.com/binary-husky/
-cd
+git clone https://github.com/binary-husky/gpt_academic.git
+cd gpt_academic
 ```

 2. Configura API_KEY
@@ -139,8 +139,8 @@ python main.py
 1. Solo ChatGPT (consigliato per la maggior parte delle persone)

 ``` sh
-git clone https://github.com/binary-husky/
-cd
+git clone https://github.com/binary-husky/gpt_academic.git # scarica il progetto
+cd gpt_academic # entra nel percorso
 nano config.py # con un qualsiasi editor di testo, modifica config.py configurando "Proxy", "API_KEY" e "WEB_PORT" (ad esempio 50923)
 docker build -t gpt-academic . # installa

@@ -171,10 +171,10 @@ docker-compose up
 Configura API_URL_REDIRECT seguendo le istruzioni nel file `config.py`.

 2. Distribuzione su un server cloud remoto (richiede conoscenze ed esperienza di server cloud)
-Si prega di visitare [wiki di distribuzione-1] (https://github.com/binary-husky/
+Si prega di visitare [wiki di distribuzione-1] (https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

 3. Utilizzo di WSL2 (Windows Subsystem for Linux)
-Si prega di visitare [wiki di distribuzione-2] (https://github.com/binary-husky/
+Si prega di visitare [wiki di distribuzione-2] (https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

 4. Come far funzionare ChatGPT all'interno di un sottodominio (ad es. `http://localhost/subpath`)
 Si prega di visitare [Istruzioni per l'esecuzione con FastAPI] (docs/WithFastapi.md)
@@ -206,7 +206,7 @@ ad esempio
 2. Plugin di funzione personalizzati

 Scrivi plugin di funzione personalizzati e esegui tutte le attività che desideri o non hai mai pensato di fare.
-La difficoltà di scrittura e debug dei plugin del nostro progetto è molto bassa. Se si dispone di una certa conoscenza di base di Python, è possibile realizzare la propria funzione del plugin seguendo il nostro modello. Per maggiori dettagli, consultare la [guida al plugin per funzioni](https://github.com/binary-husky/
+La difficoltà di scrittura e debug dei plugin del nostro progetto è molto bassa. Se si dispone di una certa conoscenza di base di Python, è possibile realizzare la propria funzione del plugin seguendo il nostro modello. Per maggiori dettagli, consultare la [guida al plugin per funzioni](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).

 ---
 # Ultimo aggiornamento
docs/README.md.Korean.md
CHANGED
@@ -13,7 +13,7 @@ GPT를 이용하여 프로젝트를 임의의 언어로 번역하려면 [`multi_
 >
 > 1. 파일을 읽기 위해 **빨간색**으로 표시된 기능 플러그인 (버튼) 만 지원됩니다. 일부 플러그인은 플러그인 영역의 **드롭다운 메뉴**에 있습니다. 또한 새로운 플러그인은 **가장 높은 우선순위**로 환영하며 처리합니다!
 >
-> 2. 이 프로젝트의 각 파일의 기능을 [`self_analysis.md`](https://github.com/binary-husky/
+> 2. 이 프로젝트의 각 파일의 기능을 [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)에서 자세히 설명합니다. 버전이 업데이트 됨에 따라 관련된 기능 플러그인을 클릭하고 GPT를 호출하여 프로젝트의 자체 분석 보고서를 다시 생성할 수도 있습니다. 자주 묻는 질문은 [`위키`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)에서 볼 수 있습니다. [설치 방법](#installation).
 >
 > 3. 이 프로젝트는 국내 언어 모델 chatglm과 RWKV, 판고 등의 시도와 호환 가능합니다. 여러 개의 api-key를 지원하며 설정 파일에 "API_KEY="openai-key1,openai-key2,api2d-key3""와 같이 작성할 수 있습니다. `API_KEY`를 임시로 변경해야하는 경우 입력 영역에 임시 `API_KEY`를 입력 한 후 엔터 키를 누르면 즉시 적용됩니다.

@@ -25,13 +25,13 @@ GPT를 이용하여 프로젝트를 임의의 언어로 번역하려면 [`multi_
 한-영 키워드 | 한-영 키워드 지원
 코드 설명 | 코드 표시, 코드 설명, 코드 생성, 코드에 주석 추가
 [사용자 정의 바로 가기 키](https://www.bilibili.com/video/BV14s4y1E7jN) | 사용자 정의 바로 가기 키 지원
-모듈식 설계 | 강력한[함수 플러그인](https://github.com/binary-husky/
+모듈식 설계 | 강력한[함수 플러그인](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions) 지원, 플러그인이 [램 업데이트](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)를 지원합니다.
 [자체 프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] [원 키 우드] 프로젝트 소스 코드의 내용을 이해하는 기능을 제공
 [프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] 프로젝트 트리를 분석할 수 있습니다 (Python/C/C++/Java/Lua/...)
 논문 읽기, 번역 | [함수 플러그인] LaTex/PDF 논문의 전문을 읽고 요약을 생성합니다.
 LaTeX 텍스트[번역](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [원 키워드](https://www.bilibili.com/video/BV1FT411H7c5/) | [함수 플러그인] LaTeX 논문의 번역 또는 개량을 위해 일련의 모드를 번역할 수 있습니다.
 대량의 주석 생성 | [함수 플러그인] 함수 코멘트를 대량으로 생성할 수 있습니다.
-Markdown 한-영 번역 | [함수 플러그인] 위의 5 종 언어의 [README](https://github.com/binary-husky/
+Markdown 한-영 번역 | [함수 플러그인] 위의 5 종 언어의 [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)를 볼 수 있습니다.
 chat 분석 보고서 생성 | [함수 플러그인] 수행 후 요약 보고서를 자동으로 생성합니다.
 [PDF 논문 번역](https://www.bilibili.com/video/BV1KT411x7Wn) | [함수 플러그인] PDF 논문이 제목 및 요약을 추출한 후 번역됩니다. (멀티 스레드)
 [Arxiv 도우미](https://www.bilibili.com/video/BV1LM4y1279X) | [함수 플러그인] Arxiv 논문 URL을 입력하면 요약을 번역하고 PDF를 다운로드 할 수 있습니다.
@@ -73,8 +73,8 @@ LLM 모델 추가 및[huggingface 배치](https://huggingface.co/spaces/qingxu98

 1. 프로젝트 다운로드
 ```sh
-git clone https://github.com/binary-husky/
-cd
+git clone https://github.com/binary-husky/gpt_academic.git
+cd gpt_academic
 ```

 2. API_KEY 구성
@@ -134,8 +134,8 @@ python main.py
 1. ChatGPT 만 (대부분의 사람들이 선택하는 것을 권장합니다.)

 ``` sh
-git clone https://github.com/binary-husky/
-cd
+git clone https://github.com/binary-husky/gpt_academic.git # 다운로드
+cd gpt_academic # 경로 이동
 nano config.py # 아무 텍스트 에디터로 config.py를 열고 "Proxy","API_KEY","WEB_PORT" (예 : 50923) 등을 구성합니다.
 docker build -t gpt-academic . # 설치

@@ -165,10 +165,10 @@ docker-compose up
 API_URL_REDIRECT를 `config.py`에 따라 구성하면됩니다.

 2. 원격 클라우드 서버 배치 (클라우드 서버 지식과 경험이 필요합니다.)
-[배치위키-1](https://github.com/binary-husky/
+[배치위키-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)에 방문하십시오.

 3. WSL2 사용 (Windows Subsystem for Linux 하위 시스템)
-[배치 위키-2](https://github.com/binary-husky/
+[배치 위키-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)에 방문하십시오.

 4. 2 차 URL (예 : `http : //localhost/subpath`)에서 실행하는 방법
 [FastAPI 실행 설명서] (docs / WithFastapi.md)를 참조하십시오.
@@ -197,7 +197,7 @@ docker-compose.yml을 읽은 후 지시 사항에 따라 작업하십시오.

 2. 사용자 지정 함수 플러그인
 강력한 함수 플러그인을 작성하여 원하는 작업을 수행하십시오.
-이 프로젝트의 플러그인 작성 및 디버깅 난이도는 매우 낮으며, 일부 파이썬 기본 지식만 있으면 제공된 템플릿을 모방하여 플러그인 기능을 구현할 수 있습니다. 자세한 내용은 [함수 플러그인 가이드]를 참조하십시오. (https://github.com/binary -husky/
+이 프로젝트의 플러그인 작성 및 디버깅 난이도는 매우 낮으며, 일부 파이썬 기본 지식만 있으면 제공된 템플릿을 모방하여 플러그인 기능을 구현할 수 있습니다. 자세한 내용은 [함수 플러그인 가이드]를 참조하십시오. (https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
 ---
 # 최신 업데이트
 ## 새로운 기능 동향1. 대화 저장 기능.
docs/README.md.Portuguese.md
CHANGED
@@ -14,7 +14,7 @@ Para traduzir este projeto para qualquer idioma com o GPT, leia e execute [`mult
 >
 > 1. Por favor, preste atenção que somente os plugins de funções (botões) com a cor **vermelha** podem ler arquivos. Alguns plugins estão localizados no **menu suspenso** na área de plugins. Além disso, nós damos as boas-vindas com a **maior prioridade** e gerenciamos quaisquer novos plugins PR!
 >
-> 2. As funções de cada arquivo neste projeto são detalhadas em [`self_analysis.md`](https://github.com/binary-husky/
+> 2. As funções de cada arquivo neste projeto são detalhadas em [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A), auto-análises do projeto geradas pelo GPT também estão podem ser chamadas a qualquer momento ao clicar nos plugins relacionados. As perguntas frequentes estão resumidas no [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Instruções de Instalação](#installation).
 >
 > 3. Este projeto é compatível com e incentiva o uso de modelos de linguagem nacionais, como chatglm e RWKV, Pangolin, etc. Suporta a coexistência de várias chaves de API e pode ser preenchido no arquivo de configuração como `API_KEY="openai-key1,openai-key2,api2d-key3"`. Quando precisar alterar temporariamente o `API_KEY`, basta digitar o `API_KEY` temporário na área de entrada e pressionar Enter para que ele entre em vigor.

@@ -26,8 +26,8 @@ Um clique de polimento | Suporte a um clique polimento, um clique encontrar erro
 Tradução chinês-inglês de um clique | Tradução chinês-inglês de um clique
 Explicação de código de um único clique | Exibir código, explicar código, gerar código, adicionar comentários ao código
 [Teclas de atalho personalizadas](https://www.bilibili.com/video/BV14s4y1E7jN) | Suporte a atalhos personalizados
-Projeto modular | Suporte para poderosos plugins[de função personalizada](https://github.com/binary-husky/
-[Análise automática do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função][um clique para entender](https://github.com/binary-husky/
+Projeto modular | Suporte para poderosos plugins[de função personalizada](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), os plugins suportam[hot-reload](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[Análise automática do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função][um clique para entender](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) o código-fonte do projeto
 [Análise do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função] Um clique pode analisar a árvore de projetos do Python/C/C++/Java/Lua/...
 Leitura de artigos, [tradução](https://www.bilibili.com/video/BV1KT411x7Wn) de artigos | [Plugin de função] um clique para interpretar o resumo de artigos LaTeX/PDF e gerar resumo
 Tradução completa LATEX, polimento|[Plugin de função] Uma clique para traduzir ou polir um artigo LATEX
@@ -91,8 +91,8 @@ Mais recursos novos mostrados (geração de imagens, etc.) ... | Consulte o fina
 1. Download the project

 ```sh
-git clone https://github.com/binary-husky/
-cd
+git clone https://github.com/binary-husky/gpt_academic.git
+cd gpt_academic
 ```

 2. Configure the API KEY
@@ -149,8 +149,8 @@ python main.py
 1. Apenas ChatGPT (recomendado para a maioria das pessoas)

 ``` sh
-git clone https://github.com/binary-husky/
-cd
+git clone https://github.com/binary-husky/gpt_academic.git # Baixar o projeto
+cd gpt_academic # Entrar no caminho
 nano config.py # Editar config.py com qualquer editor de texto configurando "Proxy", "API_KEY" e "WEB_PORT" (por exemplo, 50923), etc.
 docker build -t gpt-academic . # Instale

@@ -180,10 +180,10 @@ docker-compose up
 Basta configurar o API_URL_REDIRECT de acordo com as instruções em `config.py`.

 2. Implantação em servidores em nuvem remotos (requer conhecimento e experiência de servidores em nuvem)
-Acesse [Wiki de implementação remota do servidor em nuvem](https://github.com/binary-husky/
+Acesse [Wiki de implementação remota do servidor em nuvem](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

 3. Usando a WSL2 (sub-sistema do Windows para Linux)
-Acesse [Wiki da implantação da WSL2](https://github.com/binary-husky/
+Acesse [Wiki da implantação da WSL2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

 4. Como executar em um subdiretório (ex. `http://localhost/subpath`)
 Acesse [Instruções de execução FastAPI](docs/WithFastapi.md)
@@ -214,7 +214,7 @@ Por exemplo,

 Escreva plug-ins de função poderosos para executar tarefas que você deseja e não pensava possível.
 A dificuldade geral de escrever e depurar plug-ins neste projeto é baixa e, se você tem algum conhecimento básico de python, pode implementar suas próprias funções sobre o modelo que fornecemos.
-Para mais detalhes, consulte o [Guia do plug-in de função.](https://github.com/binary-husky/
+Para mais detalhes, consulte o [Guia do plug-in de função.](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).

 ---
 # Última atualização
docs/README_EN.md
CHANGED
@@ -14,7 +14,7 @@ To translate this project to arbitrary language with GPT, read and run [`multi_la
> Note:
>
> 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**!
- > 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/
> 3. This project is compatible with and encourages trying domestic large language models such as chatglm, RWKV, Pangu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. When temporarily changing `API_KEY`, enter the temporary `API_KEY` in the input area and press enter to submit, which will take effect.

<div align="center">

@@ -25,13 +25,13 @@ One-click polishing | Supports one-click polishing and one-click searching for g
One-click Chinese-English translation | One-click Chinese-English translation.
One-click code interpretation | Displays, explains, generates, and adds comments to code.
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
- Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/
- [Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/
[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/...
Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts.
Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers.
Batch annotation generation | [Function plug-in] One-click batch generation of function annotations.
- Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/
Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running.
[PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded)
[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click.

@@ -39,7 +39,7 @@ Chat analysis report generation | [Function plug-in] Automatically generate summ
Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated.
Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting.
Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click.
- Start Dark Gradio [theme](https://github.com/binary-husky/
[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right?
More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/)
More new feature displays (image generation, etc.)…… | See the end of this document for more...

@@ -79,8 +79,8 @@ More new feature displays (image generation, etc.)…… | See the end of this d
1. Download the project
```sh
- git clone https://github.com/binary-husky/
- cd
```

2. Configure the API_KEY

@@ -136,8 +136,8 @@ python main.py
1. ChatGPT Only (Recommended for Most People)

``` sh
- git clone https://github.com/binary-husky/
- cd
nano config.py                  # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc.
docker build -t gpt-academic .  # Install
```

@@ -167,10 +167,10 @@ docker-compose up
Configure API_URL_REDIRECT according to the instructions in `config.py`.

2. Deploy to a Remote Server (Requires Knowledge and Experience with Cloud Servers)
- Please visit [Deployment Wiki-1](https://github.com/binary-husky/

3. Using WSL2 (Windows Subsystem for Linux)
- Please visit [Deployment Wiki-2](https://github.com/binary-husky/

4. How to Run Under a Subpath (e.g. `http://localhost/subpath`)
Please visit [FastAPI Running Instructions](docs/WithFastapi.md)

@@ -202,7 +202,7 @@ For example,
Write powerful function plugins to perform any task you can think of, even those you cannot think of.
The difficulty of plugin writing and debugging in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plug-in functions based on the template we provide.
- For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/

---
# Latest Update

> Note:
>
> 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**!
+ > 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). With version iteration, you can also click on related function plugins at any time to call GPT to regenerate the project's self-analysis report. Common questions are summarized in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation).
> 3. This project is compatible with and encourages trying domestic large language models such as chatglm, RWKV, Pangu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. When temporarily changing `API_KEY`, enter the temporary `API_KEY` in the input area and press enter to submit, which will take effect.

<div align="center">
One-click Chinese-English translation | One-click Chinese-English translation.
One-click code interpretation | Displays, explains, generates, and adds comments to code.
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
+ Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), plug-ins support [hot update](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
+ [Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project
[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/...
Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts.
Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers.
Batch annotation generation | [Function plug-in] One-click batch generation of function annotations.
+ Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the five languages above?
Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running.
[PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded)
[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click.

Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated.
Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting.
Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click (see the sketch after this table).
+ Start Dark Gradio [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme.
[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right?
More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/)
More new feature displays (image generation, etc.)…… | See the end of this document for more...
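The multi-threaded row above is what makes "massive text" tractable: the input is chunked and the chunks are sent to the model in parallel. A self-contained sketch of that pattern, where `ask_gpt` is a stand-in for the project's real request function, not its actual API:

```python
# Illustration of the multi-threaded pattern; ask_gpt is a placeholder.
from concurrent.futures import ThreadPoolExecutor

def ask_gpt(chunk: str) -> str:
    return chunk.upper()  # pretend this calls a chat model

def process_long_text(text: str, chunk_size: int = 1000) -> str:
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(ask_gpt, chunks))  # map preserves chunk order
    return "".join(results)

print(process_long_text("massive text " * 500)[:26])
```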

1. Download the project
```sh
+ git clone https://github.com/binary-husky/gpt_academic.git
+ cd gpt_academic
```

2. Configure the API_KEY

1. ChatGPT Only (Recommended for Most People)

``` sh
+ git clone https://github.com/binary-husky/gpt_academic.git  # Download project
+ cd gpt_academic                                             # Enter path
nano config.py                  # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc.
docker build -t gpt-academic .  # Install
```
Configure API_URL_REDIRECT according to the instructions in `config.py`.
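As a reference point, `API_URL_REDIRECT` maps the default endpoint to your own relay. The exact key expected by `config.py` should be checked against that file's comments; the snippet below is a hedged example with a placeholder relay URL:

```python
# Illustrative only -- confirm the exact format in config.py.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
        "https://your-relay.example.com/v1/chat/completions",
}
```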

2. Deploy to a Remote Server (Requires Knowledge and Experience with Cloud Servers)
+ Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

3. Using WSL2 (Windows Subsystem for Linux)
+ Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

4. How to Run Under a Subpath (e.g. `http://localhost/subpath`)
Please visit [FastAPI Running Instructions](docs/WithFastapi.md); a hedged sketch follows below.
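For a feel of what running under a subpath involves (a generic Gradio/FastAPI sketch, not the actual recipe from `docs/WithFastapi.md`): mount the Gradio app onto a FastAPI application under a path prefix. `gr.mount_gradio_app` is a real Gradio helper; everything else here is an assumed minimal wiring.

```python
# Generic sketch: serve a Gradio UI at http://localhost:8000/subpath
import gradio as gr
import uvicorn
from fastapi import FastAPI

app = FastAPI()

with gr.Blocks() as demo:
    gr.Markdown("Served under a subpath")

# mount_gradio_app attaches the Blocks app below the given path prefix.
app = gr.mount_gradio_app(app, demo, path="/subpath")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```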

Write powerful function plugins to perform any task you can think of, even those you cannot think of.
The difficulty of plugin writing and debugging in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plug-in functions based on the template we provide, as sketched below.
+ For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
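A hedged sketch of the shape such a plugin tends to take. The argument list and the yield convention below are modeled on the template the guide describes, but treat both as assumptions and copy the real template instead:

```python
# Hypothetical plugin skeleton -- the real templates live in crazy_functions/.
def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """Minimal echo plugin: read the input box, append one Q/A pair, refresh the UI."""
    reply = f"You said: {txt}"
    chatbot.append((txt, reply))      # what the web UI displays
    history.extend([txt, reply])      # what later LLM calls see as context
    yield chatbot, history, "Normal"  # yielding hands control back so the UI updates
```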

---
# Latest Update
|
docs/README_FR.md
CHANGED
@@ -16,7 +16,7 @@ Pour traduire ce projet dans une langue arbitraire avec GPT, lisez et exécutez
>
> 1. Veuillez noter que seuls les plugins de fonctions (boutons) **en rouge** prennent en charge la lecture de fichiers. Certains plugins se trouvent dans le **menu déroulant** de la zone de plugins. De plus, nous accueillons et traitons les nouvelles pull requests pour les plugins avec **la plus haute priorité** !
>
- > 2. Les fonctions de chaque fichier de ce projet sont expliquées en détail dans l'auto-analyse [`self_analysis.md`](https://github.com/binary-husky/
>
> 3. Ce projet est compatible avec et encourage l'utilisation de grands modèles de langage nationaux tels que chatglm, RWKV, Pangu, etc. La coexistence de plusieurs clés API est prise en charge et peut être renseignée dans le fichier de configuration, par exemple `API_KEY="openai-key1,openai-key2,api2d-key3"`. Lorsque vous souhaitez remplacer temporairement `API_KEY`, saisissez temporairement `API_KEY` dans la zone de saisie, puis appuyez sur Entrée pour soumettre et activer.

@@ -28,13 +28,13 @@ Révision en un clic | prend en charge la révision en un clic et la recherche d
Traduction chinois-anglais en un clic | Traduction chinois-anglais en un clic
Explication de code en un clic | Affichage, explication, génération et ajout de commentaires de code
[Raccourcis personnalisés](https://www.bilibili.com/video/BV14s4y1E7jN) | prend en charge les raccourcis personnalisés
- Conception modulaire | prend en charge de puissants plugins de fonction personnalisée, les plugins prennent en charge la [mise à jour à chaud](https://github.com/binary-husky/
- [Autoscanner](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] [Compréhension instantanée](https://github.com/binary-husky/
[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] Analyse en un clic de la structure d'autres projets Python / C / C++ / Java / Lua / ...
Lecture d'articles, [traduction](https://www.bilibili.com/video/BV1KT411x7Wn) d'articles | [Plug-in de fonction] Compréhension instantanée de l'article latex / pdf complet et génération de résumés
[Traduction](https://www.bilibili.com/video/BV1nk4y1Y7Js/) et [révision](https://www.bilibili.com/video/BV1FT411H7c5/) complètes en latex | [Plug-in de fonction] traduction ou révision en un clic d'articles en latex
Génération de commentaires en masse | [Plug-in de fonction] Génération en un clic de commentaires de fonction en masse
- Traduction [chinois-anglais](https://www.bilibili.com/video/BV1yo4y157jV/) en Markdown | [Plug-in de fonction] avez-vous vu le [README](https://github.com/binary-husky/
Génération de rapports d'analyse de chat | [Plug-in de fonction] Génère automatiquement un rapport de résumé après l'exécution
[Traduction intégrale en pdf](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plug-in de fonction] Extraction du titre et du résumé de l'article pdf + traduction intégrale (multi-thread)
[Aide à arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plug-in de fonction] Entrer l'url de l'article arxiv pour traduire et télécharger le résumé en un clic

@@ -84,8 +84,8 @@ Plus de nouvelles fonctionnalités (génération d'images, etc.) ... | Voir la f
1. Télécharger le projet
```sh
- git clone https://github.com/binary-husky/
- cd
```

2. Configuration de la clé API

@@ -141,8 +141,8 @@ python main.py
1. ChatGPT uniquement (recommandé pour la plupart des gens)

``` sh
- git clone https://github.com/binary-husky/
- cd
nano config.py                  # Éditez config.py avec n'importe quel éditeur de texte en configurant "Proxy", "API_KEY" et "WEB_PORT" (p. ex. 50923)
docker build -t gpt-academic .  # Installer
```

@@ -172,10 +172,10 @@ docker-compose up
Configurez simplement API_URL_REDIRECT selon les instructions de `config.py`.

2. Déploiement distant sur un serveur cloud (connaissance et expérience des serveurs cloud requises)
- Veuillez consulter le [Wiki de déploiement-1](https://github.com/binary-husky/

3. Utilisation de WSL2 (sous-système Windows pour Linux)
- Veuillez consulter le [Wiki de déploiement-2](https://github.com/binary-husky/

4. Comment exécuter sous un sous-répertoire (tel que `http://localhost/subpath`)
Veuillez consulter les [instructions d'exécution de FastAPI](docs/WithFastapi.md).

@@ -206,7 +206,7 @@ Par exemple
Écrivez des plugins de fonctions puissants pour effectuer toutes les tâches que vous souhaitez, même celles que vous n'imaginiez pas possibles.
Les plugins de ce projet sont très faciles à programmer et à déboguer : si vous avez des connaissances de base en Python, vous pouvez implémenter votre propre plugin en suivant le modèle que nous fournissons.
- Veuillez consulter le [Guide du plugin de fonction](https://github.com/binary-husky/

---
# Latest Update

>
> 1. Veuillez noter que seuls les plugins de fonctions (boutons) **en rouge** prennent en charge la lecture de fichiers. Certains plugins se trouvent dans le **menu déroulant** de la zone de plugins. De plus, nous accueillons et traitons les nouvelles pull requests pour les plugins avec **la plus haute priorité** !
>
+ > 2. Les fonctions de chaque fichier de ce projet sont expliquées en détail dans l'auto-analyse [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins de fonctions pertinents et appeler GPT pour régénérer le rapport d'auto-analyse du projet à tout moment. Les FAQ sont résumées dans [le wiki](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Méthode d'installation](#installation).
>
> 3. Ce projet est compatible avec et encourage l'utilisation de grands modèles de langage nationaux tels que chatglm, RWKV, Pangu, etc. La coexistence de plusieurs clés API est prise en charge et peut être renseignée dans le fichier de configuration, par exemple `API_KEY="openai-key1,openai-key2,api2d-key3"`. Lorsque vous souhaitez remplacer temporairement `API_KEY`, saisissez temporairement `API_KEY` dans la zone de saisie, puis appuyez sur Entrée pour soumettre et activer.

Traduction chinois-anglais en un clic | Traduction chinois-anglais en un clic
Explication de code en un clic | Affichage, explication, génération et ajout de commentaires de code
[Raccourcis personnalisés](https://www.bilibili.com/video/BV14s4y1E7jN) | prend en charge les raccourcis personnalisés
+ Conception modulaire | prend en charge de puissants plugins de fonction personnalisée, les plugins prennent en charge la [mise à jour à chaud](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+ [Autoscanner](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] [Compréhension instantanée](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) du code source de ce projet
[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] Analyse en un clic de la structure d'autres projets Python / C / C++ / Java / Lua / ...
Lecture d'articles, [traduction](https://www.bilibili.com/video/BV1KT411x7Wn) d'articles | [Plug-in de fonction] Compréhension instantanée de l'article latex / pdf complet et génération de résumés
[Traduction](https://www.bilibili.com/video/BV1nk4y1Y7Js/) et [révision](https://www.bilibili.com/video/BV1FT411H7c5/) complètes en latex | [Plug-in de fonction] traduction ou révision en un clic d'articles en latex
Génération de commentaires en masse | [Plug-in de fonction] Génération en un clic de commentaires de fonction en masse
+ Traduction [chinois-anglais](https://www.bilibili.com/video/BV1yo4y157jV/) en Markdown | [Plug-in de fonction] avez-vous vu le [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) pour les 5 langues ci-dessus ?
Génération de rapports d'analyse de chat | [Plug-in de fonction] Génère automatiquement un rapport de résumé après l'exécution
[Traduction intégrale en pdf](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plug-in de fonction] Extraction du titre et du résumé de l'article pdf + traduction intégrale (multi-thread)
[Aide à arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plug-in de fonction] Entrer l'url de l'article arxiv pour traduire et télécharger le résumé en un clic

1. Télécharger le projet
```sh
+ git clone https://github.com/binary-husky/gpt_academic.git
+ cd gpt_academic
```

2. Configuration de la clé API

1. ChatGPT uniquement (recommandé pour la plupart des gens)

``` sh
+ git clone https://github.com/binary-husky/gpt_academic.git  # Télécharger le projet
+ cd gpt_academic                                             # Accéder au chemin
nano config.py                  # Éditez config.py avec n'importe quel éditeur de texte en configurant "Proxy", "API_KEY" et "WEB_PORT" (p. ex. 50923)
docker build -t gpt-academic .  # Installer
```

Configurez simplement API_URL_REDIRECT selon les instructions de `config.py`.

2. Déploiement distant sur un serveur cloud (connaissance et expérience des serveurs cloud requises)
+ Veuillez consulter le [Wiki de déploiement-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97).

3. Utilisation de WSL2 (sous-système Windows pour Linux)
+ Veuillez consulter le [Wiki de déploiement-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2).

4. Comment exécuter sous un sous-répertoire (tel que `http://localhost/subpath`)
Veuillez consulter les [instructions d'exécution de FastAPI](docs/WithFastapi.md).

Écrivez des plugins de fonctions puissants pour effectuer toutes les tâches que vous souhaitez, même celles que vous n'imaginiez pas possibles.
Les plugins de ce projet sont très faciles à programmer et à déboguer : si vous avez des connaissances de base en Python, vous pouvez implémenter votre propre plugin en suivant le modèle que nous fournissons.
+ Veuillez consulter le [Guide du plugin de fonction](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) pour plus de détails.

---
# Latest Update
|
docs/README_JP.md
CHANGED
@@ -16,7 +16,7 @@ GPTを使った任意の言語にこのプロジェクトを翻訳するには
>
> 1. **赤色**で表示された関数プラグイン(ボタン)のみ、ファイルの読み取りをサポートしています。一部のプラグインは、プラグインエリアの**ドロップダウンメニュー**内にあります。また、私たちはどんな新しいプラグインのPRでも、**最優先**で歓迎し、処理します!
>
- > 2. このプロジェクトの各ファイルの機能は、自己解析の詳細説明書である[`self_analysis.md`](https://github.com/binary-husky/

> 3. このプロジェクトは、chatglmやRWKV、Panguなど、国内の大規模自然言語モデルを利用することをサポートし、試みることを奨励します。複数のAPIキーを共存させることができ、設定ファイルに`API_KEY="openai-key1,openai-key2,api2d-key3"`のように記入することができます。`API_KEY`を一時的に変更する場合は、入力エリアに一時的な`API_KEY`を入力してEnterキーを押せば、それが有効になります。

@@ -29,13 +29,13 @@ GPTを使った任意の言語にこのプロジェクトを翻訳するには
一键中英翻訳 | 一键で中英翻訳可能
一键コード解説 | コードを表示し、解説し、生成し、コードに注釈をつけることができる
[自分でカスタマイズ可能なショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | 自分でカスタマイズ可能なショートカットキーをサポートする
- モジュール化された設計 | カスタマイズ可能な[強力な関数プラグイン](https://github.com/binary-husky/
- [自己プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン] [一键読解](https://github.com/binary-husky/
プログラム解析 | [関数プラグイン] 一鍵で他のPython/C/C++/Java/Lua/...プロジェクトを分析できる
論文の読み、[翻訳](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] LaTex/PDF論文の全文を一鍵で読み解き、要約を生成することができる
LaTex全文[翻訳](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[校正](https://www.bilibili.com/video/BV1FT411H7c5/) | [関数プラグイン] LaTex論文の翻訳または校正を一鍵で行うことができる
一括で注釈を生成 | [関数プラグイン] 一鍵で関数に注釈をつけることができる
- Markdown[中英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [関数プラグイン] 上記の5種類の言語の[README](https://github.com/binary-husky/
チャット分析レポート生成 | [関数プラグイン] 実行後、自動的に概要報告書を生成する
[PDF論文全文翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] PDF論文からタイトルと要約を抽出し、全文を翻訳する(マルチスレッド)
[Arxivアシスタント](https://www.bilibili.com/video/BV1LM4y1279X) | [関数プラグイン] arxiv記事のURLを入力するだけで、要約を一鍵翻訳し、PDFをダウンロードできる

@@ -43,7 +43,7 @@ Markdown[中英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [関数
インターネット情報収集+GPT | [関数プラグイン] まずGPTに[インターネットから情報を収集](https://www.bilibili.com/video/BV1om4y127ck)してから質問に回答させ、情報が常に最新であるようにする
数式/画像/表表示 | 数式の[tex形式とレンダリング形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)を同時に表示し、数式、コードハイライトをサポートしている
マルチスレッド関数プラグインがサポートされている | chatgptをマルチスレッドで呼び出し、[大量のテキスト](https://www.bilibili.com/video/BV1FT411H7c5/)またはプログラムを一鍵で処理できる
- ダークグラジオ[テーマの起動](https://github.com/binary-husky/
[多数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)がサポートされ、[API2D](https://api2d.com/)がサポートされている | 同時にGPT3.5、GPT4、[清華ChatGLM](https://github.com/THUDM/ChatGLM-6B)、[復旦MOSS](https://github.com/OpenLMLab/MOSS)に対応
より多くのLLMモデルが接続され、[huggingfaceデプロイ](https://huggingface.co/spaces/qingxu98/gpt-academic)がサポートされている | Newbingインターフェイス(New Bing)、清華[Jittorllms](https://github.com/Jittor/JittorLLMs)による[LLaMA](https://github.com/facebookresearch/llama)、[RWKV](https://github.com/BlinkDL/ChatRWKV)と[盘古α](https://openi.org.cn/pangu/)のサポートを導入
さらに多くの新機能(画像生成など)を紹介する... | この文書の最後に示す...

@@ -92,8 +92,8 @@ Markdown[中英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [関数
1. Download the project.

```sh
- git clone https://github.com/binary-husky/
- cd
```

2. Configure the API_KEY.

@@ -151,8 +151,8 @@ python main.py
1. Only ChatGPT (recommended for most people)

``` sh
- git clone https://github.com/binary-husky/
- cd
nano config.py                  # Edit config.py with any text editor: configure "Proxy", "API_KEY", "WEB_PORT" (e.g., 50923) and more
docker build -t gpt-academic .  # installation
```

@@ -182,10 +182,10 @@ docker-compose up
Configure API_URL_REDIRECT according to the instructions in `config.py`.

2. Remote Cloud Server Deployment (requires cloud server knowledge and experience)
- Please visit [Deployment Wiki-1](https://github.com/binary-husky/

3. Using WSL2 (Windows Subsystem for Linux)
- Please visit [Deployment Wiki-2](https://github.com/binary-husky/

4. How to run on a secondary URL (such as `http://localhost/subpath`)
Please visit [FastAPI Running Instructions](docs/WithFastapi.md)

@@ -216,7 +216,7 @@ example:
Write powerful function plugins to perform any task you can and cannot think of.
The difficulty of writing and debugging plugins in this project is low; with some basic Python knowledge, you can follow the template we provide to implement your own plugin functions.
- For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/

---
# Latest Update

>
> 1. **赤色**で表示された関数プラグイン(ボタン)のみ、ファイルの読み取りをサポートしています。一部のプラグインは、プラグインエリアの**ドロップダウンメニュー**内にあります。また、私たちはどんな新しいプラグインのPRでも、**最優先**で歓迎し、処理します!
>
+ > 2. このプロジェクトの各ファイルの機能は、自己解析の詳細説明書である[`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)で説明されています。バージョンが進化するにつれて、関連する関数プラグインをいつでもクリックし、GPTを呼び出してプロジェクトの自己解析レポートを再生成することができます。よくある問題は[`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)にまとめられています。[インストール方法](#installation)。

> 3. このプロジェクトは、chatglmやRWKV、Panguなど、国内の大規模自然言語モデルを利用することをサポートし、試みることを奨励します。複数のAPIキーを共存させることができ、設定ファイルに`API_KEY="openai-key1,openai-key2,api2d-key3"`のように記入することができます。`API_KEY`を一時的に変更する場合は、入力エリアに一時的な`API_KEY`を入力してEnterキーを押せば、それが有効になります。

一键中英翻訳 | 一键で中英翻訳可能
一键コード解説 | コードを表示し、解説し、生成し、コードに注釈をつけることができる
[自分でカスタマイズ可能なショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | 自分でカスタマイズ可能なショートカットキーをサポートする
+ モジュール化された設計 | カスタマイズ可能な[強力な関数プラグイン](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions)をサポートし、プラグインは[ホットアップデート](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)に対応している
+ [自己プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン] このプロジェクトのソースコードを[一键読解](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)できる
プログラム解析 | [関数プラグイン] 一鍵で他のPython/C/C++/Java/Lua/...プロジェクトを分析できる
論文の読み、[翻訳](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] LaTex/PDF論文の全文を一鍵で読み解き、要約を生成することができる
LaTex全文[翻訳](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[校正](https://www.bilibili.com/video/BV1FT411H7c5/) | [関数プラグイン] LaTex論文の翻訳または校正を一鍵で行うことができる
一括で注釈を生成 | [関数プラグイン] 一鍵で関数に注釈をつけることができる
+ Markdown[中英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [関数プラグイン] 上記の5種類の言語の[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)を見たことがありますか?
チャット分析レポート生成 | [関数プラグイン] 実行後、自動的に概要報告書を生成する
[PDF論文全文翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] PDF論文からタイトルと要約を抽出し、全文を翻訳する(マルチスレッド)
[Arxivアシスタント](https://www.bilibili.com/video/BV1LM4y1279X) | [関数プラグイン] arxiv記事のURLを入力するだけで、要約を一鍵翻訳し、PDFをダウンロードできる

インターネット情報収集+GPT | [関数プラグイン] まずGPTに[インターネットから情報を収集](https://www.bilibili.com/video/BV1om4y127ck)してから質問に回答させ、情報が常に最新であるようにする
数式/画像/表表示 | 数式の[tex形式とレンダリング形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)を同時に表示し、数式、コードハイライトをサポートしている
マルチスレッド関数プラグインがサポートされている | chatgptをマルチスレッドで呼び出し、[大量のテキスト](https://www.bilibili.com/video/BV1FT411H7c5/)またはプログラムを一鍵で処理できる
+ ダークグラジオ[テーマの起動](https://github.com/binary-husky/gpt_academic/issues/173) | ブラウザのURLの後ろに```/?__theme=dark```を追加すると、ダークテーマに切り替えることができます。
[多数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)がサポートされ、[API2D](https://api2d.com/)がサポートされている | 同時にGPT3.5、GPT4、[清華ChatGLM](https://github.com/THUDM/ChatGLM-6B)、[復旦MOSS](https://github.com/OpenLMLab/MOSS)に対応
より多くのLLMモデルが接続され、[huggingfaceデプロイ](https://huggingface.co/spaces/qingxu98/gpt-academic)がサポートされている | Newbingインターフェイス(New Bing)、清華[Jittorllms](https://github.com/Jittor/JittorLLMs)による[LLaMA](https://github.com/facebookresearch/llama)、[RWKV](https://github.com/BlinkDL/ChatRWKV)と[盘古α](https://openi.org.cn/pangu/)のサポートを導入
さらに多くの新機能(画像生成など)を紹介する... | この文書の最後に示す...

1. Download the project.

```sh
+ git clone https://github.com/binary-husky/gpt_academic.git
+ cd gpt_academic
```

2. Configure the API_KEY.

1. Only ChatGPT (recommended for most people)

``` sh
+ git clone https://github.com/binary-husky/gpt_academic.git  # Download project
+ cd gpt_academic                                             # Enter path
nano config.py                  # Edit config.py with any text editor: configure "Proxy", "API_KEY", "WEB_PORT" (e.g., 50923) and more
docker build -t gpt-academic .  # installation
```

Configure API_URL_REDIRECT according to the instructions in `config.py`.

2. Remote Cloud Server Deployment (requires cloud server knowledge and experience)
+ Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

3. Using WSL2 (Windows Subsystem for Linux)
+ Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

4. How to run on a secondary URL (such as `http://localhost/subpath`)
Please visit [FastAPI Running Instructions](docs/WithFastapi.md)

Write powerful function plugins to perform any task you can and cannot think of.
The difficulty of writing and debugging plugins in this project is low; with some basic Python knowledge, you can follow the template we provide to implement your own plugin functions.
+ For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).

---
# Latest Update
|
docs/README_RS.md
CHANGED
@@ -11,7 +11,7 @@
>
> 1. Обратите внимание, что только функциональные плагины (кнопки), помеченные **красным цветом**, поддерживают чтение файлов, некоторые плагины находятся в **выпадающем меню** в области плагинов. Кроме того, мы с наивысшим приоритетом рады и обрабатываем pull requests для любых новых плагинов!
>
- > 2. В каждом файле проекта функциональность описана в документе самоанализа [`self_analysis.md`](https://github.com/binary-husky/
>
> 3. Этот проект совместим и поощряет использование китайских языковых моделей chatglm, RWKV, Pangu и т.д. Поддерживается несколько api-key, которые могут существовать одновременно и могут быть указаны в файле конфигурации, например `API_KEY="openai-key1,openai-key2,api2d-key3"`. Если требуется временно изменить `API_KEY`, введите временный `API_KEY` в области ввода и нажмите клавишу Enter, чтобы он вступил в силу.

@@ -33,13 +33,13 @@
Однокнопочный перевод на английский и китайский | Однокнопочный перевод на английский и китайский
Однокнопочное объяснение кода | Показ кода, объяснение его, генерация кода, комментирование кода
[Настройка быстрых клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настройки быстрых клавиш
- Модульный дизайн | Поддержка мощных пользовательских [функциональных плагинов](https://github.com/binary-husky/
- [Анализ своей программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] [Однокнопочный просмотр](https://github.com/binary-husky/
[Анализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] Однокнопочный анализ дерева других проектов Python/C/C++/Java/Lua/...
Чтение статей, [перевод](https://www.bilibili.com/video/BV1KT411x7Wn) статей | [Функциональный плагин] Однокнопочное чтение полного текста научных статей и генерация резюме
Полный перевод [LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) и совершенствование | [Функциональный плагин] Однокнопочный перевод или совершенствование LaTeX статьи
Автоматическое комментирование | [Функциональный плагин] Однокнопочное автоматическое генерирование комментариев функций
- [Перевод](https://www.bilibili.com/video/BV1yo4y157jV/) Markdown на английский и китайский | [Функциональный плагин] Вы видели обе версии файлов [README](https://github.com/binary-husky/
Отчет о чат-анализе | [Функциональный плагин] После запуска будет автоматически сгенерировано сводное извещение
Функция перевода полного текста [PDF-статьи](https://www.bilibili.com/video/BV1KT411x7Wn) | [Функциональный плагин] Извлечение заголовка и резюме PDF-статьи и перевод всего документа (многопоточность)
[Arxiv Helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Функциональный плагин] Введите URL статьи на arxiv и одним щелчком мыши переведите резюме и загрузите PDF

@@ -81,8 +81,8 @@
1. Download the project
```sh
- git clone https://github.com/binary-husky/
- cd
```

2. Configure API_KEY

@@ -138,8 +138,8 @@ python main.py
1. ChatGPT only (recommended for most people)

``` sh
- git clone https://github.com/binary-husky/
- cd
nano config.py                  # edit config.py with any text editor to configure "Proxy", "API_KEY", and "WEB_PORT" (e.g. 50923)
docker build -t gpt-academic .  # install
```

@@ -169,10 +169,10 @@ docker-compose up
Configure API_URL_REDIRECT according to the instructions in `config.py`.

2. Remote Cloud Server Deployment (Requires Knowledge and Experience of Cloud Servers)
- Please visit [Deployment Wiki-1](https://github.com/binary-husky/

3. Using WSL2 (Windows Subsystem for Linux)
- Please visit [Deployment Wiki-2](https://github.com/binary-husky/

4. How to run at a secondary URL (such as `http://localhost/subpath`)
Please visit [FastAPI Operation Instructions](docs/WithFastapi.md)

@@ -204,7 +204,7 @@ For example:
Write powerful function plugins to perform any task you can and can't imagine.
The difficulty of debugging and writing plugins in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plugin function by imitating the template we provide.
- Please refer to the [Function Plugin Guide](https://github.com/binary-husky/

---
# Latest Update

>
> 1. Обратите внимание, что только функциональные плагины (кнопки), помеченные **красным цветом**, поддерживают чтение файлов, некоторые плагины находятся в **выпадающем меню** в области плагинов. Кроме того, мы с наивысшим приоритетом рады и обрабатываем pull requests для любых новых плагинов!
>
+ > 2. В каждом файле проекта функциональность описана в документе самоанализа [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). С каждой итерацией версии вы можете в любое время заново сгенерировать отчет о самоанализе этого проекта, щелкнув соответствующий функциональный плагин и вызвав GPT. Частые вопросы собраны в [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Метод установки](#installation).
>
> 3. Этот проект совместим и поощряет использование китайских языковых моделей chatglm, RWKV, Pangu и т.д. Поддерживается несколько api-key, которые могут существовать одновременно и могут быть указаны в файле конфигурации, например `API_KEY="openai-key1,openai-key2,api2d-key3"`. Если требуется временно изменить `API_KEY`, введите временный `API_KEY` в области ввода и нажмите клавишу Enter, чтобы он вступил в силу.

Однокнопочный перевод на английский и китайский | Однокнопочный перевод на английский и китайский
Однокнопочное объяснение кода | Показ кода, объяснение его, генерация кода, комментирование кода
[Настройка быстрых клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настройки быстрых клавиш
+ Модульный дизайн | Поддержка мощных пользовательских [функциональных плагинов](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), плагины поддерживают [горячую замену](https://github.com/binary-husky/gpt_academic/wiki/Function-Plug-in-Guide)
+ [Анализ своей программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] [Однокнопочный просмотр](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academicProject-Self-analysis-Report) исходного кода этого проекта
[Анализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] Однокнопочный анализ дерева других проектов Python/C/C++/Java/Lua/...
Чтение статей, [перевод](https://www.bilibili.com/video/BV1KT411x7Wn) статей | [Функциональный плагин] Однокнопочное чтение полного текста научных статей и генерация резюме
Полный перевод [LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) и совершенствование | [Функциональный плагин] Однокнопочный перевод или совершенствование LaTeX статьи
Автоматическое комментирование | [Функциональный плагин] Однокнопочное автоматическое генерирование комментариев функций
+ [Перевод](https://www.bilibili.com/video/BV1yo4y157jV/) Markdown на английский и китайский | [Функциональный плагин] Вы видели [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) для этих 5 языков?
Отчет о чат-анализе | [Функциональный плагин] После запуска будет автоматически сгенерировано сводное извещение
Функция перевода полного текста [PDF-статьи](https://www.bilibili.com/video/BV1KT411x7Wn) | [Функциональный плагин] Извлечение заголовка и резюме PDF-статьи и перевод всего документа (многопоточность)
[Arxiv Helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Функциональный плагин] Введите URL статьи на arxiv и одним щелчком мыши переведите резюме и загрузите PDF

1. Download the project
```sh
+ git clone https://github.com/binary-husky/gpt_academic.git
+ cd gpt_academic
```

2. Configure API_KEY

1. ChatGPT only (recommended for most people)

``` sh
+ git clone https://github.com/binary-husky/gpt_academic.git  # download the project
+ cd gpt_academic                                             # enter the path
nano config.py                  # edit config.py with any text editor to configure "Proxy", "API_KEY", and "WEB_PORT" (e.g. 50923)
docker build -t gpt-academic .  # install
```

Configure API_URL_REDIRECT according to the instructions in `config.py`.

2. Remote Cloud Server Deployment (Requires Knowledge and Experience of Cloud Servers)
+ Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

3. Using WSL2 (Windows Subsystem for Linux)
+ Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

4. How to run at a secondary URL (such as `http://localhost/subpath`)
Please visit [FastAPI Operation Instructions](docs/WithFastapi.md)

Write powerful function plugins to perform any task you can and can't imagine.
The difficulty of debugging and writing plugins in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plugin function by imitating the template we provide.
+ Please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) for details.

---
# Latest Update
|
docs/translate_english.json
CHANGED
@@ -1667,5 +1667,499 @@
"段音频的主要内容": "The main content of the segment audio is",
"z$ 分别是空间直角坐标系中的三个坐标": "z$, respectively, are the three coordinates in the spatial rectangular coordinate system",
"这个是怎么识别的呢我也不清楚": "I'm not sure how this is recognized",
- "从现在起": "From now on"
}
|
|
|
1667 |
"段音频的主要内容": "The main content of the segment audio is",
|
1668 |
"z$ 分别是空间直角坐标系中的三个坐标": "z$, respectively, are the three coordinates in the spatial rectangular coordinate system",
|
1669 |
"这个是怎么识别的呢我也不清楚": "I'm not sure how this is recognized",
|
1670 |
+
"从现在起": "From now on",
|
1671 |
+
"连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
|
1672 |
+
"联网的ChatGPT_bing版": "OnlineChatGPT_BingEdition",
|
1673 |
+
"Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
|
1674 |
+
"Langchain知识库": "LangchainKnowledgeBase",
|
1675 |
+
"Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
|
1676 |
+
"Latex输出PDF结果": "OutputPDFFromLatex",
|
1677 |
+
"Latex翻译中文并重新编译PDF": "TranslateChineseToEnglishInLatexAndRecompilePDF",
|
1678 |
+
"sprint亮靛": "SprintIndigo",
|
1679 |
+
"寻找Latex主文件": "FindLatexMainFile",
|
1680 |
+
"专业词汇声明": "ProfessionalTerminologyDeclaration",
|
1681 |
+
"Latex精细分解与转化": "DecomposeAndConvertLatex",
|
1682 |
+
"编译Latex": "CompileLatex",
|
1683 |
+
"如果您是论文原作者": "If you are the original author of the paper",
|
1684 |
+
"正在编译对比PDF": "Compiling the comparison PDF",
|
1685 |
+
"将 \\include 命令转换为 \\input 命令": "Converting the \\include command to the \\input command",
|
1686 |
+
"取评分最高者返回": "Returning the highest-rated one",
|
1687 |
+
"不要修改!! 高危设置!通过修改此设置": "Do not modify!! High-risk setting! By modifying this setting",
|
1688 |
+
"Tex源文件缺失!": "Tex source file is missing!",
|
1689 |
+
"6.25 加入判定latex模板的代码": "Added code to determine the latex template on June 25",
|
1690 |
+
"正在精细切分latex文件": "Finely splitting the latex file",
|
1691 |
+
"获取response失败": "Failed to get response",
|
1692 |
+
"手动指定语言": "Manually specify the language",
|
1693 |
+
"输入arxivID": "Enter arxivID",
|
1694 |
+
"对输入的word文档进行摘要生成": "Generate a summary of the input word document",
|
1695 |
+
"将指定目录下的PDF文件从英文翻译成中文": "Translate PDF files from English to Chinese in the specified directory",
|
1696 |
+
"如果分析错误": "If the analysis is incorrect",
|
1697 |
+
"尝试第": "Try the",
|
1698 |
+
"用户填3": "User fills in 3",
|
1699 |
+
"请在此处追加更细致的矫错指令": "Please append more detailed correction instructions here",
|
1700 |
+
"为了防止大语言模型的意外谬误产生扩散影响": "To prevent the accidental spread of errors in large language models",
|
1701 |
+
"前面是中文冒号": "The colon before is in Chinese",
|
1702 |
+
"内含已经翻译的Tex文档": "Contains a Tex document that has been translated",
|
1703 |
+
"成功啦": "Success!",
|
1704 |
+
"刷新页面即可以退出UpdateKnowledgeArchive模式": "Refresh the page to exit UpdateKnowledgeArchive mode",
|
1705 |
+
"或者不在环境变量PATH中": "Or not in the environment variable PATH",
|
1706 |
+
"--读取文件": "--Read the file",
|
1707 |
+
"才能继续下面的步骤": "To continue with the next steps",
|
1708 |
+
"代理数据解析失败": "Proxy data parsing failed",
|
1709 |
+
"详见项目主README.md": "See the main README.md of the project for details",
|
1710 |
+
"临时存储用于调试": "Temporarily stored for debugging",
|
1711 |
+
"屏蔽空行和太短的句子": "Filter out empty lines and sentences that are too short",
|
1712 |
+
"gpt 多线程请求": "GPT multi-threaded request",
|
1713 |
+
"编译已经开始": "Compilation has started",
|
1714 |
+
"无法找到一个主Tex文件": "Cannot find a main Tex file",
|
1715 |
+
"修复括号": "Fix parentheses",
|
1716 |
+
"请您不要删除或修改这行警告": "Please do not delete or modify this warning",
|
1717 |
+
"请登录OpenAI查看详情 https": "Please log in to OpenAI to view details at https",
|
1718 |
+
"调用函数": "Call a function",
|
1719 |
+
"请查看终端的输出或耐心等待": "Please check the output in the terminal or wait patiently",
|
1720 |
+
"LatexEnglishCorrection+高亮修正位置": "Latex English correction + highlight correction position",
|
1721 |
+
"行": "line",
|
1722 |
+
"Newbing 请求失败": "Newbing request failed",
|
1723 |
+
"转化PDF编译是否成功": "Check if the conversion to PDF and compilation were successful",
|
1724 |
+
"建议更换代理协议": "Recommend changing the proxy protocol",
|
1725 |
+
"========================================= 插件主程序1 =====================================================": "========================================= Plugin Main Program 1 =====================================================",
|
1726 |
+
"终端": "terminal",
|
1727 |
+
"请先上传文件素材": "Please upload file materials first",
|
1728 |
+
"前面是中文逗号": "There is a Chinese comma in front",
|
1729 |
+
"请尝试把以下指令复制到高级参数区": "Please try copying the following instructions to the advanced parameters section",
|
1730 |
+
"翻译-": "Translation -",
|
1731 |
+
"请耐心等待": "Please be patient",
|
1732 |
+
"将前后断行符脱离": "Remove line breaks before and after",
|
1733 |
+
"json等": "JSON, etc.",
|
1734 |
+
"生成中文PDF": "Generate Chinese PDF",
|
1735 |
+
"用红色标注处保留区": "Use red color to highlight the reserved area",
|
1736 |
+
"对比PDF编译是否成功": "Compare if the PDF compilation was successful",
|
1737 |
+
"回答完问题后": "After answering the question",
|
1738 |
+
"其他操作系统表现未知": "Unknown performance on other operating systems",
|
1739 |
+
"-构建知识库": "Build knowledge base",
|
1740 |
+
"还原原文": "Restore original text",
|
1741 |
+
"或者重启之后再度尝试": "Or try again after restarting",
|
1742 |
+
"免费": "Free",
|
1743 |
+
"仅在Windows系统进行了测试": "Tested only on Windows system",
|
1744 |
+
"欢迎加REAME中的QQ联系开发者": "Feel free to contact the developer via QQ in REAME",
|
1745 |
+
"当前知识库内的有效文件": "Valid files in the current knowledge base",
|
1746 |
+
"您可以到Github Issue区": "You can go to the Github Issue area",
|
1747 |
+
"刷新Gradio前端界面": "Refresh the Gradio frontend interface",
|
1748 |
+
"吸收title与作者以上的部分": "Include the title and the above part of the author",
|
1749 |
+
"给出一些判定模板文档的词作为扣分项": "Provide some words in the template document as deduction items",
|
1750 |
+
"--读取参数": "-- Read parameters",
|
1751 |
+
"然后进行问答": "And then perform question-answering",
|
1752 |
+
"根据自然语言执行插件命令": "Execute plugin commands based on natural language",
|
1753 |
+
"*{\\scriptsize\\textbf{警告": "*{\\scriptsize\\textbf{Warning",
|
1754 |
+
"但请查收结果": "But please check the results",
|
1755 |
+
"翻译内容可靠性无保障": "No guarantee of translation accuracy",
|
1756 |
+
"寻找主文件": "Find the main file",
|
1757 |
+
"消耗时间的函数": "Time-consuming function",
|
1758 |
+
"当前语言模型温度设定": "Current language model temperature setting",
|
1759 |
+
"这需要一段时间计算": "This requires some time to calculate",
|
1760 |
+
"为啥chatgpt会把cite里面的逗号换成中文逗号呀": "Why does ChatGPT change commas inside 'cite' to Chinese commas?",
|
1761 |
+
"发现已经存在翻译好的PDF文档": "Found an already translated PDF document",
|
1762 |
+
"待提取的知识库名称id": "Knowledge base name ID to be extracted",
|
1763 |
+
"文本碎片重组为完整的tex片段": "Reassemble text fragments into complete tex fragments",
|
1764 |
+
"注意事项": "Notes",
|
1765 |
+
"参数说明": "Parameter description",
|
1766 |
+
"或代理节点": "Or proxy node",
|
1767 |
+
"构建知识库": "Building knowledge base",
|
1768 |
+
"报错信息如下. 如果是与网络相关的问题": "Error message as follows. If it is related to network issues",
|
1769 |
+
"功能描述": "Function description",
|
1770 |
+
"禁止移除或修改此警告": "Removal or modification of this warning is prohibited",
|
1771 |
+
"Arixv翻译": "Arixv translation",
|
1772 |
+
"读取优先级": "Read priority",
|
1773 |
+
"包含documentclass关键字": "Contains the documentclass keyword",
|
1774 |
+
"根据文本使用GPT模型生成相应的图像": "Generate corresponding images using GPT model based on the text",
|
1775 |
+
"图像生成所用到的提示文本": "Prompt text used for image generation",
|
1776 |
+
"Your account is not active. OpenAI以账户失效为由": "Your account is not active. OpenAI states that it is due to account expiration",
|
1777 |
+
"快捷的调试函数": "Convenient debugging function",
|
1778 |
+
"在多Tex文档中": "In multiple Tex documents",
|
1779 |
+
"因此选择GenerateImage函数": "Therefore, choose the GenerateImage function",
|
1780 |
+
"当前工作路径为": "The current working directory is",
|
1781 |
+
"实际得到格式": "Obtained format in reality",
|
1782 |
+
"这段代码定义了一个名为TempProxy的空上下文管理器": "This code defines an empty context manager named TempProxy",
|
1783 |
+
"吸收其他杂项": "Absorb other miscellaneous items",
|
1784 |
+
"请输入要翻译成哪种语言": "Please enter which language to translate into",
|
1785 |
+
"的单词": "of the word",
|
1786 |
+
"正在尝试自动安装": "Attempting automatic installation",
|
1787 |
+
"如果有必要": "If necessary",
|
1788 |
+
"开始下载": "Start downloading",
|
1789 |
+
"项目Github地址 \\url{https": "Project GitHub address \\url{https",
|
1790 |
+
"将根据报错信息修正tex源文件并重试": "The Tex source file will be corrected and retried based on the error message",
|
1791 |
+
"发送至azure openai api": "Send to Azure OpenAI API",
|
1792 |
+
"吸收匿名公式": "Absorb anonymous formulas",
|
1793 |
+
"用该压缩包+ConversationHistoryArchive进行反馈": "Provide feedback using the compressed package + ConversationHistoryArchive",
|
1794 |
+
"需要特殊依赖": "Requires special dependencies",
|
1795 |
+
"还原部分原文": "Restore part of the original text",
|
1796 |
+
"构建完成": "Build completed",
|
1797 |
+
"解析arxiv网址失败": "Failed to parse arXiv URL",
|
1798 |
+
"输入问题后点击该插件": "Click the plugin after entering the question",
|
1799 |
+
"请求子进程": "Requesting subprocess",
|
1800 |
+
"请务必用 pip install -r requirements.txt 指令安装依赖": "Please make sure to install the dependencies using the 'pip install -r requirements.txt' command",
|
1801 |
+
"如果程序停顿5分钟以上": "If the program pauses for more than 5 minutes",
|
1802 |
+
"转化PDF编译已经成功": "Conversion to PDF compilation was successful",
|
1803 |
+
"虽然PDF生成失败了": "Although PDF generation failed",
|
1804 |
+
"分析上述回答": "Analyze the above answer",
|
1805 |
+
"吸收在42行以内的begin-end组合": "Absorb the begin-end combination within 42 lines",
|
1806 |
+
"推荐http": "Recommend http",
|
1807 |
+
"Latex没有安装": "Latex is not installed",
|
1808 |
+
"用latex编译为PDF对修正处做高亮": "Compile to PDF using LaTeX and highlight the corrections",
|
1809 |
+
"reverse 操作必须放在最后": "'reverse' operation must be placed at the end",
|
1810 |
+
"AZURE OPENAI API拒绝了请求": "AZURE OPENAI API rejected the request",
|
1811 |
+
"该项目的Latex主文件是": "The main LaTeX file of this project is",
|
1812 |
+
"You are associated with a deactivated account. OpenAI以账户失效为由": "You are associated with a deactivated account. OpenAI considers it as an account expiration",
|
1813 |
+
"它*必须*被包含在AVAIL_LLM_MODELS列表中": "It *must* be included in the AVAIL_LLM_MODELS list",
|
1814 |
+
"未知指令": "Unknown command",
|
1815 |
+
"尝试执行Latex指令失败": "Failed to execute the LaTeX command",
|
1816 |
+
"摘要生成后的文档路径": "Path of the document after summary generation",
|
1817 |
+
"GPT结果已输出": "GPT result has been outputted",
|
1818 |
+
"使用Newbing": "Using Newbing",
|
1819 |
+
"其他模型转化效果未知": "Unknown conversion effect of other models",
|
1820 |
+
"P.S. 但愿没人把latex模板放在里面传进来": "P.S. Hopefully, no one passes a LaTeX template in it",
|
1821 |
+
"定位主Latex文件": "Locate the main LaTeX file",
|
1822 |
+
"后面是英文冒号": "English colon follows",
|
1823 |
+
"文档越长耗时越长": "The longer the document, the longer it takes.",
|
1824 |
+
"压缩包": "Compressed file",
|
1825 |
+
"但通常不会出现在正文": "But usually does not appear in the body.",
|
1826 |
+
"正在预热文本向量化模组": "Preheating text vectorization module",
|
1827 |
+
"5刀": "5 dollars",
|
1828 |
+
"提问吧! 但注意": "Ask questions! But be careful",
|
1829 |
+
"发送至AZURE OPENAI API": "Send to AZURE OPENAI API",
|
1830 |
+
"请仔细鉴别并以原文为准": "Please carefully verify and refer to the original text",
|
1831 |
+
"如果需要使用AZURE 详情请见额外文档 docs\\use_azure.md": "If you need to use AZURE, please refer to the additional document docs\\use_azure.md for details",
|
1832 |
+
"使用正则表达式查找半行注释": "Use regular expressions to find inline comments",
|
1833 |
+
"只有第二步成功": "Only the second step is successful",
|
1834 |
+
"P.S. 顺便把CTEX塞进去以支持中文": "P.S. By the way, include CTEX to support Chinese",
|
1835 |
+
"安装方法https": "Installation method: https",
|
1836 |
+
"则跳过GPT请求环节": "Then skip the GPT request process",
|
1837 |
+
"请切换至“UpdateKnowledgeArchive”插件进行知识库访问": "Please switch to the 'UpdateKnowledgeArchive' plugin for knowledge base access",
|
1838 |
+
"=================================== 工具函数 ===============================================": "=================================== Utility functions ===============================================",
|
1839 |
+
"填入azure openai api的密钥": "Fill in the Azure OpenAI API key",
|
1840 |
+
"上传Latex压缩包": "Upload LaTeX compressed file",
|
1841 |
+
"远程云服务器部署": "Deploy to remote cloud server",
|
1842 |
+
"用黑色标注转换区": "Use black color to annotate the conversion area",
|
1843 |
+
"音频文件的路径": "Path to the audio file",
|
1844 |
+
"必须包含documentclass": "Must include documentclass",
|
1845 |
+
"再列出用户可能提出的三个问题": "List three more questions that the user might ask",
|
1846 |
+
"根据需要切换prompt": "Switch the prompt as needed",
|
1847 |
+
"将文件复制一份到下载区": "Make a copy of the file in the download area",
|
1848 |
+
"次编译": "Second compilation",
|
1849 |
+
"Latex文件融合完成": "LaTeX file merging completed",
|
1850 |
+
"返回": "Return",
|
1851 |
+
"后面是英文逗号": "Comma after this",
|
1852 |
+
"对不同latex源文件扣分": "Deduct points for different LaTeX source files",
|
1853 |
+
"失败啦": "Failed",
|
1854 |
+
"编译BibTex": "Compile BibTeX",
|
1855 |
+
"Linux下必须使用Docker安装": "Must install using Docker on Linux",
|
1856 |
+
"报错信息": "Error message",
|
1857 |
+
"删除或修改歧义文件": "Delete or modify ambiguous files",
|
1858 |
+
"-预热文本向量化模组": "- Preheating text vectorization module",
|
1859 |
+
"将每次对话记录写入Markdown格式的文件中": "Write each conversation record into a file in Markdown format",
|
1860 |
+
"其他类型文献转化效果未知": "Unknown conversion effect for other types of literature",
|
1861 |
+
"获取线程锁": "Acquire thread lock",
|
1862 |
+
"使用英文": "Use English",
|
1863 |
+
"如果存在调试缓存文件": "If there is a debug cache file",
|
1864 |
+
"您需要首先调用构建知识库": "You need to call the knowledge base building first",
|
1865 |
+
"原始PDF编译是否成功": "Whether the original PDF compilation is successful",
|
1866 |
+
"生成 azure openai api请求": "Generate Azure OpenAI API requests",
|
1867 |
+
"正在编译PDF": "Compiling PDF",
|
1868 |
+
"仅调试": "Debug only",
|
1869 |
+
"========================================= 插件主程序2 =====================================================": "========================================= Plugin Main Program 2 =====================================================",
|
1870 |
+
"多线程翻译开始": "Multithreaded translation begins",
|
1871 |
+
"出问题了": "There is a problem",
|
1872 |
+
"版权归原文作者所有": "Copyright belongs to the original author",
|
1873 |
+
"当前大语言模型": "Current large language model",
|
1874 |
+
"目前对机器学习类文献转化效果最好": "Currently, the best conversion effect for machine learning literature",
|
1875 |
+
"这个paper有个input命令文件名大小写错误!": "This paper has an input command with a filename case error!",
|
1876 |
+
"期望格式例如": "Expected format, for example",
|
1877 |
+
"解决部分词汇翻译不准确的问题": "Resolve the issue of inaccurate translation for some terms",
|
1878 |
+
"待注入的知识库名称id": "Name/ID of the knowledge base to be injected",
|
1879 |
+
"精细切分latex文件": "Fine-grained segmentation of LaTeX files",
|
1880 |
+
"永远给定None": "Always given None",
|
1881 |
+
"work_folder = Latex预处理": "work_folder = LaTeX preprocessing",
|
1882 |
+
"请直接去该路径下取回翻译结果": "Please directly go to the path to retrieve the translation results",
|
1883 |
+
"寻找主tex文件": "Finding the main .tex file",
|
1884 |
+
"模型参数": "Model parameters",
|
1885 |
+
"返回找到的第一个": "Return the first one found",
|
1886 |
+
"编译转化后的PDF": "Compile the converted PDF",
|
1887 |
+
"\\SEAFILE_LOCALŅ03047\\我的资料库\\music\\Akie秋绘-未来轮廓.mp3": "\\SEAFILE_LOCALŅ03047\\My Library\\music\\Akie秋绘-未来轮廓.mp3",
|
1888 |
+
"拆分过长的latex片段": "Splitting overly long LaTeX fragments",
|
1889 |
+
"没有找到任何可读取文件": "No readable files found",
|
1890 |
+
"暗色模式 / 亮色模式": "Dark mode / Light mode",
|
1891 |
+
"检测到arxiv文档连接": "Detected arXiv document link",
|
1892 |
+
"此插件Windows支持最佳": "This plugin has best support for Windows",
|
1893 |
+
"from crazy_functions.虚空终端 import 终端": "from crazy_functions.null_terminal import Terminal",
|
1894 |
+
"本地论文翻译": "Local paper translation",
|
1895 |
+
"输出html调试文件": "Output HTML debugging file",
|
1896 |
+
"以下所有配置也都支持利用环境变量覆写": "All the following configurations can also be overridden using environment variables",
|
1897 |
+
"PDF文件所在的路径": "Path of the PDF file",
|
1898 |
+
"也是可读的": "It is also readable",
|
1899 |
+
"将消耗较长时间下载中文向量化模型": "Downloading Chinese vectorization model will take a long time",
|
1900 |
+
"环境变量配置格式见docker-compose.yml": "See docker-compose.yml for the format of environment variable configuration",
|
1901 |
+
"编译文献交叉引用": "Compile bibliographic cross-references",
|
1902 |
+
"默认为default": "Default is 'default'",
|
1903 |
+
"或者使用此插件继续上传更多文件": "Or use this plugin to continue uploading more files",
|
1904 |
+
"该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成": "This PDF is generated by the GPT-Academic open-source project using a large language model + LaTeX translation plugin",
|
1905 |
+
"使用latexdiff生成论文转化前后对比": "Use latexdiff to generate before and after comparison of paper transformation",
|
1906 |
+
"正在编译PDF文档": "Compiling PDF document",
|
1907 |
+
"读取config.py文件中关于AZURE OPENAI API的信息": "Read the information about AZURE OPENAI API from the config.py file",
|
1908 |
+
"配置教程&视频教程": "Configuration tutorial & video tutorial",
|
1909 |
+
"临时地启动代理网络": "Temporarily start proxy network",
|
1910 |
+
"临时地激活代理网络": "Temporarily activate proxy network",
|
1911 |
+
"功能尚不稳定": "Functionality is unstable",
|
1912 |
+
"默认为Chinese": "Default is Chinese",
|
1913 |
+
"请查收结果": "Please check the results",
|
1914 |
+
"将 chatglm 直接对齐到 chatglm2": "Align chatglm directly to chatglm2",
|
1915 |
+
"中读取数据构建知识库": "Build a knowledge base by reading data in",
|
1916 |
+
"用于给一小段代码上代理": "Used to proxy a small piece of code",
|
1917 |
+
"分析结果": "Analysis results",
|
1918 |
+
"依赖不足": "Insufficient dependencies",
|
1919 |
+
"Markdown翻译": "Markdown translation",
|
1920 |
+
"除非您是论文的原作者": "Unless you are the original author of the paper",
|
1921 |
+
"test_LangchainKnowledgeBase读取": "test_LangchainKnowledgeBase read",
|
1922 |
+
"将多文件tex工程融合为一个巨型tex": "Merge multiple tex projects into one giant tex",
|
1923 |
+
"吸收iffalse注释": "Absorb iffalser comments",
|
1924 |
+
"您接下来不能再使用其他插件了": "You can no longer use other plugins next",
|
1925 |
+
"正在构建知识库": "Building knowledge base",
|
1926 |
+
"需Latex": "Requires Latex",
|
1927 |
+
"即找不到": "That is not found",
|
1928 |
+
"保证括号正确": "Ensure parentheses are correct",
|
1929 |
+
"= 2 通过一些Latex模板中常见": "= 2 through some common Latex templates",
|
1930 |
+
"请立即终止程序": "Please terminate the program immediately",
|
1931 |
+
"解压失败! 需要安装pip install rarfile来解压rar文件": "Decompression failed! Install 'pip install rarfile' to decompress rar files",
|
1932 |
+
"请在此处给出自定义翻译命令": "Please provide custom translation command here",
|
1933 |
+
"解压失败! 需要安装pip install py7zr来解压7z文件": "Decompression failed! Install 'pip install py7zr' to decompress 7z files",
|
1934 |
+
"执行错误": "Execution error",
|
1935 |
+
"目前仅支持GPT3.5/GPT4": "Currently only supports GPT3.5/GPT4",
|
1936 |
+
"P.S. 顺便把Latex的注释去除": "P.S. Also remove comments from Latex",
|
1937 |
+
"写出文件": "Write out the file",
|
1938 |
+
"当前报错的latex代码处于第": "The current error in the LaTeX code is on line",
|
1939 |
+
"主程序即将开始": "Main program is about to start",
|
1940 |
+
"详情信息见requirements.txt": "See details in requirements.txt",
|
1941 |
+
"释放线程锁": "Release thread lock",
|
1942 |
+
"由于最为关键的转化PDF编译失败": "Due to the critical failure of PDF conversion and compilation",
|
1943 |
+
"即将退出": "Exiting soon",
|
1944 |
+
"尝试下载": "Attempting to download",
|
1945 |
+
"删除整行的空注释": "Remove empty comments from the entire line",
|
1946 |
+
"也找不到": "Not found either",
|
1947 |
+
"从一批文件": "From a batch of files",
|
1948 |
+
"编译结束": "Compilation finished",
|
1949 |
+
"调用缓存": "Calling cache",
|
1950 |
+
"只有GenerateImage和生成图像相关": "Only GenerateImage and image generation related",
|
1951 |
+
"待处理的word文档路径": "Path of the word document to be processed",
|
1952 |
+
"是否在提交时自动清空输入框": "Whether to automatically clear the input box upon submission",
|
1953 |
+
"检查结果": "Check the result",
|
1954 |
+
"生成时间戳": "Generate a timestamp",
|
1955 |
+
"编译原始PDF": "Compile the original PDF",
|
1956 |
+
"填入ENGINE": "Fill in ENGINE",
|
1957 |
+
"填入api版本": "Fill in the API version",
|
1958 |
+
"中文Bing版": "Chinese Bing version",
|
1959 |
+
"当前支持的格式包括": "Currently supported formats include",
|
1960 |
+
"交互功能模板函数": "InteractiveFunctionTemplateFunction",
|
1961 |
+
"交互功能函数模板": "InteractiveFunctionFunctionTemplate",
|
1962 |
+
"语音助手": "VoiceAssistant",
|
1963 |
+
"微调数据集生成": "FineTuneDatasetGeneration",
|
1964 |
+
"chatglm微调工具": "ChatGLMFineTuningTool",
|
1965 |
+
"启动微调": "StartFineTuning",
|
1966 |
+
"请讲话": "Please speak",
|
1967 |
+
"正在听您讲话": "Listening to you",
|
1968 |
+
"对这个人外貌、身处的环境、内心世界、过去经历进行描写": "Describe the appearance, environment, inner world, and past experiences of this person",
|
1969 |
+
"请向下翻": "Please scroll down",
|
1970 |
+
"实时音频采集": "Real-time audio collection",
|
1971 |
+
"找不到": "Not found",
|
1972 |
+
"在一个异步线程中采集音频": "Collect audio in an asynchronous thread",
|
1973 |
+
"azure和api2d请求源": "Azure and API2D request source",
|
1974 |
+
"等待ChatGLMFT响应中": "Waiting for ChatGLMFT response",
|
1975 |
+
"如果使用ChatGLM2微调模型": "If using ChatGLM2 fine-tuning model",
|
1976 |
+
"把文件复制过去": "Copy the file over",
|
1977 |
+
"可选": "Optional",
|
1978 |
+
"ChatGLMFT响应异常": "ChatGLMFT response exception",
|
1979 |
+
"上传本地文件/压缩包供函数插件调用": "Upload local files/compressed packages for function plugin calls",
|
1980 |
+
"例如 f37f30e0f9934c34a992f6f64f7eba4f": "For example, f37f30e0f9934c34a992f6f64f7eba4f",
|
1981 |
+
"正在等您说完问题": "Waiting for you to finish the question",
|
1982 |
+
"解除插件状态": "Release plugin status",
|
1983 |
+
"详情见https": "See details at https",
|
1984 |
+
"避免线程阻塞": "Avoid thread blocking",
|
1985 |
+
"先上传数据集": "Upload dataset first",
|
1986 |
+
"请直接提交即可": "Submit directly",
|
1987 |
+
"Call ChatGLMFT fail 不能正常加载ChatGLMFT的参数": "Call ChatGLMFT fail, cannot load ChatGLMFT parameters",
|
1988 |
+
"插件可读取“输入区”文本/路径作为参数": "The plugin can read text/path in the input area as parameters",
|
1989 |
+
"给出指令": "Give instructions",
|
1990 |
+
"暂不提交": "Do not submit for now",
|
1991 |
+
"如 绿帽子*深蓝色衬衫*黑色运动裤": "E.g. green hat * dark blue shirt * black sports pants",
|
1992 |
+
"阿里云实时语音识别 配置难度较高 仅建议高手用户使用 参考 https": "Aliyun real-time speech recognition has high configuration difficulty and is only recommended for advanced users. Refer to https",
|
1993 |
+
"ChatGLMFT尚未加载": "ChatGLMFT has not been loaded yet",
|
1994 |
+
"输入 clear 以清空对话历史": "Enter 'clear' to clear the conversation history",
|
1995 |
+
"可以将自身的状态存储到cookie中": "You can store your own status in cookies",
|
1996 |
+
"填入你亲手写的部署名": "Fill in the deployment name you wrote by yourself",
|
1997 |
+
"该选项即将被弃用": "This option will be deprecated soon",
|
1998 |
+
"代理网络配置": "Proxy network configuration",
|
1999 |
+
"每秒采样数量": "Number of samples per second",
|
2000 |
+
"使用时": "When using",
|
2001 |
+
"想象一个穿着者": "Imagine a wearer",
|
2002 |
+
"如果已经存在": "If it already exists",
|
2003 |
+
"例如您可以将以下命令复制到下方": "For example, you can copy the following command below",
|
2004 |
+
"正在锁定插件": "Locking plugin",
|
2005 |
+
"使用": "Use",
|
2006 |
+
"读 docs\\use_azure.md": "Read docs\\use_azure.md",
|
2007 |
+
"开始最终总结": "Start final summary",
|
2008 |
+
"openai的官方KEY需要伴随组织编码": "Openai's official KEY needs to be accompanied by organizational code",
|
2009 |
+
"将子线程的gpt结果写入chatbot": "Write the GPT result of the sub-thread into the chatbot",
|
2010 |
+
"Arixv论文精细翻译": "Fine translation of Arixv paper",
|
2011 |
+
"开始接收chatglmft的回复": "Start receiving replies from chatglmft",
|
2012 |
+
"请先将.doc文档转换为.docx文档": "Please convert .doc documents to .docx documents first",
|
2013 |
+
"避免多用户干扰": "Avoid multiple user interference",
|
2014 |
+
"清空label": "Clear label",
|
2015 |
+
"解除插件锁定": "Unlock plugin",
|
2016 |
+
"请以以下方式load模型!!!": "Please load the model in the following way!!!",
|
2017 |
+
"没给定指令": "No instruction given",
|
2018 |
+
"100字以内": "Within 100 words",
|
2019 |
+
"获取关键词": "Get keywords",
|
2020 |
+
"欢迎使用 MOSS 人工智能助手!": "Welcome to use MOSS AI assistant!",
|
2021 |
+
"音频助手": "Audio assistant",
|
2022 |
+
"上传Latex项目": "Upload Latex project",
|
2023 |
+
"对话助手函数插件": "Chat assistant function plugin",
|
2024 |
+
"如果一句话小于7个字": "If a sentence is less than 7 words",
|
2025 |
+
"640个字节为一组": "640 bytes per group",
|
2026 |
+
"右下角更换模型菜单中可切换openai": "OpenAI can be switched in the model menu in the lower right corner",
|
2027 |
+
"双手离开鼠标键盘吧": "Take your hands off the mouse and keyboard",
|
2028 |
+
"先删除": "Delete first",
|
2029 |
+
"如果要使用ChatGLMFT": "If you want to use ChatGLMFT",
|
2030 |
+
"例如 RoPlZrM88DnAFkZK": "For example, RoPlZrM88DnAFkZK",
|
2031 |
+
"提取总结": "Extract summary",
|
2032 |
+
"ChatGLMFT消耗大量的内存": "ChatGLMFT consumes a lot of memory",
|
2033 |
+
"格式如org-123456789abcdefghijklmno的": "In the format of org-123456789abcdefghijklmno",
|
2034 |
+
"在执行完成之后": "After execution is complete",
|
2035 |
+
"此处填API密钥": "Fill in the API key here",
|
2036 |
+
"chatglmft 没有 sys_prompt 接口": "ChatGLMFT does not have a sys_prompt interface",
|
2037 |
+
"用第二人称": "Use the second person",
|
2038 |
+
"Chuanhu-Small-and-Beautiful主题": "Chuanhu-Small-and-Beautiful theme",
|
2039 |
+
"请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期": "Please check if ALIYUN_TOKEN and ALIYUN_APPKEY have expired",
|
2040 |
+
"还需要填写组织": "You also need to fill in the organization",
|
2041 |
+
"会直接转到该函数": "Will directly jump to the function",
|
2042 |
+
"初始化插件状态": "Initializing plugin status",
|
2043 |
+
"插件锁定中": "Plugin is locked",
|
2044 |
+
"如果这里报错": "If there is an error here",
|
2045 |
+
"本地Latex论文精细翻译": "Local Latex paper fine translation",
|
2046 |
+
"极少数情况下": "In very few cases",
|
2047 |
+
"首先你在中文语境下通读整篇论文": "First, read the entire paper in a Chinese context",
|
2048 |
+
"点击“停止”键可终止程序": "Click the 'Stop' button to terminate the program",
|
2049 |
+
"建议排查": "Suggested troubleshooting",
|
2050 |
+
"没有阿里云语音识别APPKEY和TOKEN": "No Aliyun voice recognition APPKEY and TOKEN",
|
2051 |
+
"避免遗忘导致死锁": "Avoid forgetting to cause deadlock",
|
2052 |
+
"第一次调用": "First call",
|
2053 |
+
"解决插件锁定时的界面显示问题": "Solve the interface display problem when the plugin is locked",
|
2054 |
+
"初始化音频采集线程": "Initialize audio capture thread",
|
2055 |
+
"找不到微调模型检查点": "Cannot find fine-tuning model checkpoint",
|
2056 |
+
"色彩主体": "Color theme",
|
2057 |
+
"上传文件自动修正路径": "Automatically correct the path when uploading files",
|
2058 |
+
"将文件添加到chatbot cookie中": "Add files to chatbot cookie",
|
2059 |
+
"正常状态": "Normal state",
|
2060 |
+
"建议使用英文单词": "Suggest using English words",
|
2061 |
+
"Aliyun音频服务异常": "Aliyun audio service exception",
|
2062 |
+
"格式如org-xxxxxxxxxxxxxxxxxxxxxxxx": "Format like org-xxxxxxxxxxxxxxxxxxxxxxxx",
|
2063 |
+
"GPT 学术优化": "GPT academic optimization",
|
2064 |
+
"要求": "Requirement",
|
2065 |
+
"赋予插件状态": "Assign plugin status",
|
2066 |
+
"等待GPT响应": "Waiting for GPT response",
|
2067 |
+
"MOSS can understand and communicate fluently in the language chosen by the user such as English and 中文. MOSS can perform any language-based tasks.": "MOSS can understand and communicate fluently in the language chosen by the user such as English and Chinese. MOSS can perform any language-based tasks.",
|
2068 |
+
"我将为您查找相关壁纸": "I will search for related wallpapers for you",
|
2069 |
+
"当下一次用户提交时": "When the next user submits",
|
2070 |
+
"赋予插件锁定 锁定插件回调路径": "Assign plugin lock, lock plugin callback path",
|
2071 |
+
"处理个别特殊插件的锁定状态": "Handle the lock status of individual special plugins",
|
2072 |
+
"add gpt task 创建子线程请求gpt": "Add GPT task, create sub-thread to request GPT",
|
2073 |
+
"等待用户的再次调用": "Waiting for the user to call again",
|
2074 |
+
"只读": "Read-only",
|
2075 |
+
"用于灵活调整复杂功能的各种参数": "Various parameters used to flexibly adjust complex functions",
|
2076 |
+
"输入 stop 以终止对话": "Enter stop to terminate the conversation",
|
2077 |
+
"缺少ChatGLMFT的依赖": "Missing dependency of ChatGLMFT",
|
2078 |
+
"找 API_ORG 设置项": "Find API_ORG setting item",
|
2079 |
+
"检查config中的AVAIL_LLM_MODELS选项": "Check the AVAIL_LLM_MODELS option in config",
|
2080 |
+
"对这个人外貌、身处的环境、内心世界、人设进行描写": "Describe the appearance, environment, inner world, and character of this person.",
|
2081 |
+
"请输入关键词": "Please enter a keyword.",
|
2082 |
+
"!!!如果需要运行量化版本": "!!! If you need to run the quantitative version.",
|
2083 |
+
"为每一位访问的用户赋予一个独一无二的uuid编码": "Assign a unique uuid code to each visiting user.",
|
2084 |
+
"由于提问含不合规内容被Azure过滤": "Due to Azure filtering out questions containing non-compliant content.",
|
2085 |
+
"欢迎使用 MOSS 人工智能助手!输入内容即可进行对话": "Welcome to use MOSS AI assistant! Enter the content to start the conversation.",
|
2086 |
+
"记住当前的label": "Remember the current label.",
|
2087 |
+
"不能正常加载ChatGLMFT的参数!": "Cannot load ChatGLMFT parameters normally!",
|
2088 |
+
"建议直接在API_KEY处填写": "It is recommended to fill in directly at API_KEY.",
|
2089 |
+
"创建request": "Create request",
|
2090 |
+
"默认 secondary": "Default secondary",
|
2091 |
+
"会被加在你的输入之前": "Will be added before your input",
|
2092 |
+
"缺少": "Missing",
|
2093 |
+
"前者是API2D的结束条件": "The former is the termination condition of API2D",
|
2094 |
+
"无需填写": "No need to fill in",
|
2095 |
+
"后缀": "Suffix",
|
2096 |
+
"扭转的范围": "Range of twisting",
|
2097 |
+
"是否在触发时清除历史": "Whether to clear history when triggered",
|
2098 |
+
"⭐多线程方法": "⭐Multi-threaded method",
|
2099 |
+
"消耗大量的内存": "Consumes a large amount of memory",
|
2100 |
+
"重组": "Reorganize",
|
2101 |
+
"高危设置! 常规情况下不要修改! 通过修改此设置": "High-risk setting! Do not modify under normal circumstances! Modify this setting",
|
2102 |
+
"检查USE_PROXY": "Check USE_PROXY",
|
2103 |
+
"标注节点的行数范围": "Range of line numbers for annotated nodes",
|
2104 |
+
"即不处理之前的对话历史": "That is, do not process previous conversation history",
|
2105 |
+
"即将编译PDF": "Compiling PDF",
|
2106 |
+
"没有设置ANTHROPIC_API_KEY选项": "ANTHROPIC_API_KEY option is not set",
|
2107 |
+
"非Openai官方接口返回了错误": "Non-Openai official interface returned an error",
|
2108 |
+
"您的 API_KEY 不满足任何一种已知的密钥格式": "Your API_KEY does not meet any known key format",
|
2109 |
+
"格式": "Format",
|
2110 |
+
"不能正常加载": "Cannot load properly",
|
2111 |
+
"🏃♂️🏃♂️🏃♂️ 子进程执行": "🏃♂️🏃♂️🏃♂️ Subprocess execution",
|
2112 |
+
"前缀": "Prefix",
|
2113 |
+
"创建AcsClient实例": "Create AcsClient instance",
|
2114 |
+
"⭐主进程执行": "⭐Main process execution",
|
2115 |
+
"增强稳健性": "Enhance robustness",
|
2116 |
+
"用来描述你的要求": "Used to describe your requirements",
|
2117 |
+
"举例": "For example",
|
2118 |
+
"⭐单线程方法": "⭐Single-threaded method",
|
2119 |
+
"后者是OPENAI的结束条件": "The latter is the termination condition of OPENAI",
|
2120 |
+
"防止proxies单独起作用": "Prevent proxies from working alone",
|
2121 |
+
"将两个PDF拼接": "Concatenate two PDFs",
|
2122 |
+
"最后一步处理": "The last step processing",
|
2123 |
+
"正在从github下载资源": "Downloading resources from github",
|
2124 |
+
"失败时": "When failed",
|
2125 |
+
"尚未加载": "Not loaded yet",
|
2126 |
+
"配合前缀可以把你的输入内容用引号圈起来": "With the prefix, you can enclose your input content in quotation marks",
|
2127 |
+
"我好!": "I'm good!",
|
2128 |
+
"默认 False": "Default False",
|
2129 |
+
"的依赖": "Dependencies of",
|
2130 |
+
"并设置参数": "and set parameters",
|
2131 |
+
"会被加在你的输入之后": "Will be added after your input",
|
2132 |
+
"安装": "Installation",
|
2133 |
+
"一个单实例装饰器": "Single instance decorator",
|
2134 |
+
"自定义API KEY格式": "Customize API KEY format",
|
2135 |
+
"的参数": "Parameters of",
|
2136 |
+
"api2d等请求源": "api2d and other request sources",
|
2137 |
+
"逆转出错的段落": "Reverse the wrong paragraph",
|
2138 |
+
"没有设置ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY is not set",
|
2139 |
+
"默认 True": "Default True",
|
2140 |
+
"本项目现已支持OpenAI和Azure的api-key": "This project now supports OpenAI and Azure's api-key",
|
2141 |
+
"即可见": "Visible immediately",
|
2142 |
+
"请问什么是质子": "What is a proton?",
|
2143 |
+
"按钮是否可见": "Is the button visible?",
|
2144 |
+
"调用": "Call",
|
2145 |
+
"如果要使用": "If you want to use",
|
2146 |
+
"的参数!": "parameters!",
|
2147 |
+
"例如翻译、解释代码、润色等等": "such as translation, code interpretation, polishing, etc.",
|
2148 |
+
"响应异常": "Response exception",
|
2149 |
+
"响应中": "Responding",
|
2150 |
+
"请尝试英文Prompt": "Try English Prompt",
|
2151 |
+
"在��行过程中动态地修改多个配置": "Dynamically modify multiple configurations during runtime",
|
2152 |
+
"无法调用相关功能": "Unable to invoke related functions",
|
2153 |
+
"接驳虚空终端": "Connect to Void Terminal",
|
2154 |
+
"虚空终端插件的功能": "Functionality of Void Terminal plugin",
|
2155 |
+
"执行任意插件的命令": "Execute commands of any plugin",
|
2156 |
+
"修改调用函数": "Modify calling function",
|
2157 |
+
"获取简单聊天的默认参数": "Get default parameters for simple chat",
|
2158 |
+
"根据自然语言的描述": "Based on natural language description",
|
2159 |
+
"获取插件的句柄": "Get handle of plugin",
|
2160 |
+
"第四部分": "Part Four",
|
2161 |
+
"在运行过程中动态地修改配置": "Dynamically modify configurations during runtime",
|
2162 |
+
"请先把模型切换至gpt-*或者api2d-*": "Please switch the model to gpt-* or api2d-* first",
|
2163 |
+
"获取简单聊天的句柄": "Get handle of simple chat",
|
2164 |
+
"获取插件的默认参数": "Get default parameters of plugin"
|
2165 |
}
|
docs/translate_japanese.json
CHANGED
@@ -939,7 +939,6 @@
|
|
939 |
"以下は学術論文の基本情報です": "以下は学術論文の基本情報です",
|
940 |
"出力が不完全になる原因となる": "出力が不完全になる原因となる",
|
941 |
"ハイフンを使って": "ハイフンを使って",
|
942 |
-
"シングルスレッド": "シングルスレッド",
|
943 |
"请先把模型切换至gpt-xxxx或者api2d-xxxx": "Please switch the model to gpt-xxxx or api2d-xxxx first.",
|
944 |
"路径或网址": "Path or URL",
|
945 |
"*代表通配符": "* represents a wildcard",
|
@@ -1484,5 +1483,632 @@
|
|
1484 |
"请提交新问题": "新しい問題を提出してください",
|
1485 |
"您正在调用一个": "あなたは呼び出しています",
|
1486 |
"请编辑以下文本": "以下のテキストを編集してください",
|
1487 |
-
"常见协议无非socks5h/http": "一般的なプロトコルはsocks5h/http以外ありません"
|
1486 |
+
"常见协议无非socks5h/http": "一般的なプロトコルはsocks5h/http以外ありません",
|
1487 |
+
"Latex英文纠错": "LatexEnglishErrorCorrection",
|
1488 |
+
"连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
|
1489 |
+
"联网的ChatGPT_bing版": "OnlineChatGPT_BingVersion",
|
1490 |
+
"总结音视频": "SummarizeAudioVideo",
|
1491 |
+
"动画生成": "GenerateAnimation",
|
1492 |
+
"数学动画生成manim": "GenerateMathematicalAnimationManim",
|
1493 |
+
"Markdown翻译指定语言": "TranslateMarkdownSpecifiedLanguage",
|
1494 |
+
"知识库问答": "KnowledgeBaseQuestionAnswer",
|
1495 |
+
"Langchain知识库": "LangchainKnowledgeBase",
|
1496 |
+
"读取知识库作答": "ReadKnowledgeBaseAnswer",
|
1497 |
+
"交互功能模板函数": "InteractiveFunctionTemplateFunction",
|
1498 |
+
"交互功能函数模板": "InteractiveFunctionFunctionTemplate",
|
1499 |
+
"Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison",
|
1500 |
+
"Latex输出PDF结果": "LatexOutputPDFResult",
|
1501 |
+
"Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF",
|
1502 |
+
"语音助手": "VoiceAssistant",
|
1503 |
+
"微调数据集生成": "FineTuneDatasetGeneration",
|
1504 |
+
"chatglm微调工具": "ChatGLMFineTuningTool",
|
1505 |
+
"启动微调": "StartFineTuning",
|
1506 |
+
"sprint亮靛": "SprintAzureIndigo",
|
1507 |
+
"专业词汇声明": "ProfessionalVocabularyDeclaration",
|
1508 |
+
"Latex精细分解与转化": "LatexDetailedDecompositionAndConversion",
|
1509 |
+
"编译Latex": "CompileLatex",
|
1510 |
+
"将代码转为动画": "コードをアニメーションに変換する",
|
1511 |
+
"解析arxiv网址失败": "arxivのURLの解析に失敗しました",
|
1512 |
+
"其他模型转化效果未知": "他のモデルの変換効果は不明です",
|
1513 |
+
"把文件复制过去": "ファイルをコピーする",
|
1514 |
+
"!!!如果需要运行量化版本": "!!!量子化バージョンを実行する必要がある場合",
|
1515 |
+
"报错信息如下. 如果是与网络相关的问题": "エラーメッセージは次のとおりです。ネットワークに関連する問題の場合",
|
1516 |
+
"请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期": "ALIYUN_TOKENとALIYUN_APPKEYの有効期限を確認してください",
|
1517 |
+
"编译结束": "コンパイル終了",
|
1518 |
+
"只读": "読み取り専用",
|
1519 |
+
"模型选择是": "モデルの選択は",
|
1520 |
+
"正在从github下载资源": "GitHubからリソースをダウンロードしています",
|
1521 |
+
"同时分解长句": "同時に長い文を分解する",
|
1522 |
+
"寻找主tex文件": "メインのtexファイルを検索する",
|
1523 |
+
"例如您可以将以下命令复制到下方": "たとえば、以下のコマンドを下にコピーできます",
|
1524 |
+
"使用中文总结音频“": "中国語で音声を要約する",
|
1525 |
+
"此处填API密钥": "ここにAPIキーを入力してください",
|
1526 |
+
"裁剪输入": "入力をトリミングする",
|
1527 |
+
"当前语言模型温度设定": "現在の言語モデルの温度設定",
|
1528 |
+
"history 是之前的对话列表": "historyは以前の対話リストです",
|
1529 |
+
"对输入的word文档进行摘要生成": "入力されたWord文書の要約を生成する",
|
1530 |
+
"输入问题后点击该插件": "質問を入力した後、このプラグインをクリックします",
|
1531 |
+
"仅在Windows系统进行了测试": "Windowsシステムでのみテストされています",
|
1532 |
+
"reverse 操作必须放在最后": "reverse操作は最後に配置する必要があります",
|
1533 |
+
"即将编译PDF": "PDFをコンパイルする予定です",
|
1534 |
+
"执行错误": "エラーが発生しました",
|
1535 |
+
"段音频完成了吗": "セグメントのオーディオは完了しましたか",
|
1536 |
+
"然后重启程序": "それからプログラムを再起動してください",
|
1537 |
+
"是所有LLM的通用接口": "これはすべてのLLMの共通インターフェースです",
|
1538 |
+
"当前报错的latex代码处于第": "現在のエラーのあるLaTeXコードは第",
|
1539 |
+
"🏃♂️🏃♂️🏃♂️ 子进程执行": "🏃♂️🏃♂️🏃♂️ サブプロセスの実行",
|
1540 |
+
"用来描述你的要求": "要求を説明するために使用されます",
|
1541 |
+
"原始PDF编译是否成功": "元のPDFのコンパイルは成功しましたか",
|
1542 |
+
"本地Latex论文精细翻译": "ローカルのLaTeX論文の詳細な翻訳",
|
1543 |
+
"设置OpenAI密钥和模型": "OpenAIキーとモデルの設定",
|
1544 |
+
"如果使用ChatGLM2微调模型": "ChatGLM2ファインチューニングモデルを使用する場合",
|
1545 |
+
"项目Github地址 \\url{https": "プロジェクトのGithubアドレス \\url{https",
|
1546 |
+
"将前后断行符脱离": "前後の改行文字を削除します",
|
1547 |
+
"该项目的Latex主文件是": "このプロジェクトのLaTeXメインファイルは",
|
1548 |
+
"编译已经开始": "コンパイルが開始されました",
|
1549 |
+
"*{\\scriptsize\\textbf{警告": "*{\\scriptsize\\textbf{警告",
|
1550 |
+
"从一批文件": "一連のファイルから",
|
1551 |
+
"等待用户的再次调用": "ユーザーの再呼び出しを待っています",
|
1552 |
+
"目前仅支持GPT3.5/GPT4": "現在、GPT3.5/GPT4のみをサポートしています",
|
1553 |
+
"如果一句话小于7个字": "1つの文が7文字未満の場合",
|
1554 |
+
"目前对机器学习类文献转化效果最好": "現在、機械学習の文献変換効果が最も良いです",
|
1555 |
+
"寻找主文件": "メインファイルを検索中",
|
1556 |
+
"解除插件状态": "プラグインの状態を解除します",
|
1557 |
+
"默认为Chinese": "デフォルトはChineseです",
|
1558 |
+
"依赖不足": "不足の依存関係",
|
1559 |
+
"编译文献交叉引用": "文献の相互参照をコンパイルする",
|
1560 |
+
"对不同latex源文件扣分": "異なるLaTeXソースファイルに罰則を課す",
|
1561 |
+
"再列出用户可能提出的三个问题": "ユーザーが提出する可能性のある3つの問題を再リスト化する",
|
1562 |
+
"建议排查": "トラブルシューティングの提案",
|
1563 |
+
"生成时间戳": "タイムスタンプの生成",
|
1564 |
+
"检查config中的AVAIL_LLM_MODELS选项": "configのAVAIL_LLM_MODELSオプションを確認する",
|
1565 |
+
"chatglmft 没有 sys_prompt 接口": "chatglmftにはsys_promptインターフェースがありません",
|
1566 |
+
"在一个异步线程中采集音频": "非同期スレッドでオーディオを収集する",
|
1567 |
+
"初始化插件状态": "プラグインの状態を初期化する",
|
1568 |
+
"内含已经翻译的Tex文档": "翻訳済みのTexドキュメントが含まれています",
|
1569 |
+
"请注意自我隐私保护哦!": "プライバシー保護に注意してください!",
|
1570 |
+
"使用正则表达式查找半行注释": "正規表現を使用して半行コメントを検索する",
|
1571 |
+
"不能正常加载ChatGLMFT的参数!": "ChatGLMFTのパラメータを正常にロードできません!",
|
1572 |
+
"首先你在中文语境下通读整篇论文": "まず、中国語の文脈で論文全体を読んでください",
|
1573 |
+
"如 绿帽子*深蓝色衬衫*黑色运动裤": "例えば、緑の帽子*濃い青のシャツ*黒のスポーツパンツ",
|
1574 |
+
"默认为default": "デフォルトはdefaultです",
|
1575 |
+
"将": "置き換える",
|
1576 |
+
"使用 Unsplash API": "Unsplash APIを使用する",
|
1577 |
+
"会被加在你的输入之前": "あなたの入力の前に追加されます",
|
1578 |
+
"还需要填写组织": "組織を入力する必要があります",
|
1579 |
+
"test_LangchainKnowledgeBase读取": "test_LangchainKnowledgeBaseの読み込み",
|
1580 |
+
"目前不支持历史消息查询": "現在、過去のメッセージのクエリはサポートされていません",
|
1581 |
+
"临时存储用于调试": "デバッグ用の一時的なストレージ",
|
1582 |
+
"提取总结": "テキストの翻訳",
|
1583 |
+
"每秒采样数量": "テキストの翻訳",
|
1584 |
+
"但通常不会出现在正文": "テキストの翻訳",
|
1585 |
+
"通过调用conversations_open方法打开一个频道": "テキストの翻訳",
|
1586 |
+
"导致输出不完整": "テキストの翻訳",
|
1587 |
+
"获取已打开频道的最新消息并返回消息列表": "テキストの翻訳",
|
1588 |
+
"Tex源文件缺失!": "テキストの翻訳",
|
1589 |
+
"如果需要使用Slack Claude": "テキストの翻訳",
|
1590 |
+
"扭转的范围": "テキストの翻訳",
|
1591 |
+
"使用latexdiff生成论文转化前后对比": "テキストの翻訳",
|
1592 |
+
"--读取文件": "テキストの翻訳",
|
1593 |
+
"调用openai api 使用whisper-1模型": "テキストの翻訳",
|
1594 |
+
"避免遗忘导致死锁": "テキストの翻訳",
|
1595 |
+
"在多Tex文档中": "テキストの翻訳",
|
1596 |
+
"失败时": "テキストの翻訳",
|
1597 |
+
"然后转移到指定的另一个路径中": "テキストの翻訳",
|
1598 |
+
"使用Newbing": "テキストの翻訳",
|
1599 |
+
"的参数": "テキストの翻訳",
|
1600 |
+
"后者是OPENAI的结束条件": "テキストの翻訳",
|
1601 |
+
"构建知识库": "テキストの翻訳",
|
1602 |
+
"吸收匿名公式": "テキストの翻訳",
|
1603 |
+
"前缀": "テキストの翻訳",
|
1604 |
+
"会直接转到该函数": "テキストの翻訳",
|
1605 |
+
"Claude失败": "テキストの翻訳",
|
1606 |
+
"P.S. 但愿没人把latex模板放在里面传进来": "P.S. 但愿没人把latex模板放在里面传进来",
|
1607 |
+
"临时地启动代理网络": "临时地启动代理网络",
|
1608 |
+
"读取文件内容到内存": "読み込んだファイルの内容をメモリに保存する",
|
1609 |
+
"总结音频": "音声をまとめる",
|
1610 |
+
"没有找到任何可读取文件": "読み込み可能なファイルが見つかりません",
|
1611 |
+
"获取Slack消息失败": "Slackメッセージの取得に失敗しました",
|
1612 |
+
"用黑色标注转换区": "黒い注釈で変換エリアをマークする",
|
1613 |
+
"此插件处于开发阶段": "このプラグインは開発中です",
|
1614 |
+
"其他操作系统表现未知": "他のオペレーティングシステムの動作は不明です",
|
1615 |
+
"返回找到的第一个": "最初に見つかったものを返す",
|
1616 |
+
"发现已经存在翻译好的PDF文档": "翻訳済みのPDFドキュメントが既に存在することがわかりました",
|
1617 |
+
"不包含任何可用于": "使用できるものは含まれていません",
|
1618 |
+
"发送到openai音频解析终端": "openai音声解析端に送信する",
|
1619 |
+
"========================================= 插件主程序2 =====================================================": "========================================= プラグインメインプログラム2 =====================================================",
|
1620 |
+
"正在重试": "再試行中",
|
1621 |
+
"从而更全面地理解项目的整体功能": "プロジェクトの全体的な機能をより理解するために",
|
1622 |
+
"正在等您说完问题": "質問が完了するのをお待ちしています",
|
1623 |
+
"使用教程详情见 request_llm/README.md": "使用方法の詳細については、request_llm/README.mdを参照してください",
|
1624 |
+
"6.25 加入判定latex模板的代码": "6.25 テンプレートの判定コードを追加",
|
1625 |
+
"找不到任何音频或视频文件": "音声またはビデオファイルが見つかりません",
|
1626 |
+
"请求GPT模型的": "GPTモデルのリクエスト",
|
1627 |
+
"行": "行",
|
1628 |
+
"分析上述回答": "上記の回答を分析する",
|
1629 |
+
"如果要使用ChatGLMFT": "ChatGLMFTを使用する場合",
|
1630 |
+
"上传Latex项目": "Latexプロジェクトをアップロードする",
|
1631 |
+
"如参考文献、脚注、图注等": "参考文献、脚注、図のキャプションなど",
|
1632 |
+
"未配置": "設定されていません",
|
1633 |
+
"请在此处给出自定义翻译命令": "カスタム翻訳コマンドをここに入力してください",
|
1634 |
+
"第二部分": "第2部分",
|
1635 |
+
"解压失败! 需要安装pip install py7zr来解压7z文件": "解凍に失敗しました!7zファイルを解凍するにはpip install py7zrをインストールする必要があります",
|
1636 |
+
"吸收在42行以内的begin-end组合": "42行以内のbegin-endの組み合わせを取り込む",
|
1637 |
+
"Latex文件融合完成": "Latexファイルの統合が完了しました",
|
1638 |
+
"输出html调试文件": "HTMLデバッグファイルの出力",
|
1639 |
+
"论文概况": "論文の概要",
|
1640 |
+
"修复括号": "括弧の修復",
|
1641 |
+
"赋予插件状态": "プラグインの状態を付与する",
|
1642 |
+
"标注节点的行数范围": "ノードの行数範囲を注釈する",
|
1643 |
+
"MOSS can understand and communicate fluently in the language chosen by the user such as English and 中文. MOSS can perform any language-based tasks.": "MOSSは、ユーザーが選択した言語(英語や中文など)でスムーズに理解し、コミュニケーションすることができます。MOSSは、言語に基づくさまざまなタスクを実行できます。",
|
1644 |
+
"LLM_MODEL是默认选中的模型": "LLM_MODELはデフォルトで選択されたモデルです",
|
1645 |
+
"配合前缀可以把你的输入内容用引号圈起来": "接頭辞と組み合わせて、入力内容を引用符で囲むことができます",
|
1646 |
+
"获取关键词": "キーワードの取得",
|
1647 |
+
"本项目现已支持OpenAI和Azure的api-key": "このプロジェクトは、OpenAIおよびAzureのAPIキーをサポートしています",
|
1648 |
+
"欢迎使用 MOSS 人工智能助手!": "MOSS AIアシスタントをご利用いただきありがとうございます!",
|
1649 |
+
"在执行完成之后": "実行が完了した後",
|
1650 |
+
"正在听您讲话": "お話をお聞きしています",
|
1651 |
+
"Claude回复的片段": "Claudeの返信の一部",
|
1652 |
+
"返回": "戻る",
|
1653 |
+
"期望格式例如": "期待される形式の例",
|
1654 |
+
"gpt 多线程请求": "GPTマルチスレッドリクエスト",
|
1655 |
+
"当前工作路径为": "現在の作業パスは",
|
1656 |
+
"该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成": "このPDFはGPT-Academicオープンソースプロジェクトによって大規模言語モデル+Latex翻訳プラグインを使用して一括生成されました",
|
1657 |
+
"解决插件锁定时的界面显示问题": "プラグインのロック時のインターフェース表示の問題を解決する",
|
1658 |
+
"默认 secondary": "デフォルトのセカンダリ",
|
1659 |
+
"会把列表拆解": "リストを分解します",
|
1660 |
+
"暂时不支持历史消息": "一時的に歴史メッセージはサポートされていません",
|
1661 |
+
"或者重启之后再度尝试": "または再起動後に再試行してください",
|
1662 |
+
"吸收其他杂项": "他の雑項を吸収する",
|
1663 |
+
"双手离开鼠标键盘吧": "両手をマウスとキーボードから離してください",
|
1664 |
+
"建议更换代理协议": "プロキシプロトコルの変更をお勧めします",
|
1665 |
+
"音频助手": "オーディオアシスタント",
|
1666 |
+
"请耐心等待": "お待ちください",
|
1667 |
+
"翻译结果": "翻訳結果",
|
1668 |
+
"请在此处追加更细致的矫错指令": "ここにより詳細なエラー修正命令を追加してください",
|
1669 |
+
"编译原始PDF": "元のPDFをコンパイルする",
|
1670 |
+
"-构建知识库": "-ナレッジベースの構築",
|
1671 |
+
"删除中间文件夹": "中間フォルダを削除する",
|
1672 |
+
"这段代码定义了一个名为TempProxy的空上下文管理器": "このコードはTempProxyという名前の空のコンテキストマネージャを定義しています",
|
1673 |
+
"参数说明": "パラメータの説明",
|
1674 |
+
"正在预热文本向量化模组": "テキストベクトル化モジュールのプリヒート中",
|
1675 |
+
"函数插件": "関数プラグイン",
|
1676 |
+
"右下角更换模型菜单中可切换openai": "右下のモデルメニューでopenaiを切り替えることができます",
|
1677 |
+
"先上传数据集": "まずデータセットをアップロードしてください",
|
1678 |
+
"LatexEnglishErrorCorrection+高亮修正位置": "テキストの翻訳",
|
1679 |
+
"正在构建知识库": "テキストの翻訳",
|
1680 |
+
"用红色标注处保留区": "テキストの翻訳",
|
1681 |
+
"安装Claude的依赖": "テキストの翻訳",
|
1682 |
+
"已禁用": "テキストの翻訳",
|
1683 |
+
"是否在提交时自动清空输入框": "テキストの翻訳",
|
1684 |
+
"GPT 学术优化": "テキストの翻訳",
|
1685 |
+
"需要特殊依赖": "テキストの翻訳",
|
1686 |
+
"test_联网回答问题": "テキストの翻訳",
|
1687 |
+
"除非您是论文的原作者": "テキストの翻訳",
|
1688 |
+
"即可见": "テキストの翻訳",
|
1689 |
+
"解析为简体中文": "テキストの翻訳",
|
1690 |
+
"解析整个Python项目": "テキストの翻訳",
|
1691 |
+
"========================================= 插件主程序1 =====================================================": "テキストの翻訳",
|
1692 |
+
"当前参数": "テキストの翻訳",
|
1693 |
+
"处理个别特殊插件的锁定状态": "テキストの翻訳",
|
1694 |
+
"已知某些代码的局部作用是": "テキストの翻訳",
|
1695 |
+
"请务必用 pip install -r requirements.txt 指令安装依赖": "テキストの翻訳",
|
1696 |
+
"安装": "テキストの翻訳",
|
1697 |
+
"请登录OpenAI查看详情 https": "テキストの翻訳",
|
1698 |
+
"必须包含documentclass": "テキストの翻訳",
|
1699 |
+
"极少数情况下": "テキストの翻訳",
|
1700 |
+
"并将返回的频道ID保存在属性CHANNEL_ID中": "テキストの翻訳",
|
1701 |
+
"您的 API_KEY 不满足任何一种已知的密钥格式": "テキストの翻訳",
|
1702 |
+
"-预热文本向量化模组": "テキストの翻訳",
|
1703 |
+
"什么都没有": "テキストの翻訳",
|
1704 |
+
"等待GPT响应": "テキストの翻訳",
|
1705 |
+
"请尝试把以下指令复制到高级参数区": "テキストの翻訳",
|
1706 |
+
"模型参数": "テキストの翻訳",
|
1707 |
+
"先删除": "テキストの翻訳",
|
1708 |
+
"响应中": "テキストの翻訳",
|
1709 |
+
"开始接收chatglmft的回复": "テキストの翻訳",
|
1710 |
+
"手动指定语言": "テキストの翻訳",
|
1711 |
+
"获取线程锁": "テキストの翻訳",
|
1712 |
+
"当前大语言模型": "テキストの翻訳",
|
1713 |
+
"段音频的第": "テキストの翻訳",
|
1714 |
+
"正在编译对比PDF": "テキストの翻訳",
|
1715 |
+
"根据需要切换prompt": "テキストの翻訳",
|
1716 |
+
"取评分最高者返回": "テキストの翻訳",
|
1717 |
+
"如果您是论文原作者": "テキストの翻訳",
|
1718 |
+
"段音频的主要内容": "テキストの翻訳",
|
1719 |
+
"为啥chatgpt会把cite里面的逗号换成中文逗号呀": "テキストの翻訳",
|
1720 |
+
"为每一位访问的用户赋予一个独一无二的uuid编码": "テキストの翻訳",
|
1721 |
+
"将每次对话记录写入Markdown格式的文件中": "テキストの翻訳",
|
1722 |
+
"ChatGLMFT尚未加载": "テキストの翻訳",
|
1723 |
+
"切割音频文件": "テキストの翻訳",
|
1724 |
+
"例如 f37f30e0f9934c34a992f6f64f7eba4f": "テキストの翻訳",
|
1725 |
+
"work_folder = Latex预处理": "テキストの翻訳",
|
1726 |
+
"出问题了": "問題が発生しました",
|
1727 |
+
"等待Claude响应中": "Claudeの応答を待っています",
|
1728 |
+
"增强稳健性": "信頼性を向上させる",
|
1729 |
+
"赋予插件锁定 锁定插件回调路径": "プラグインにコールバックパスをロックする",
|
1730 |
+
"将多文件tex工程融合为一个巨型tex": "複数のファイルのtexプロジェクトを1つの巨大なtexに統合する",
|
1731 |
+
"参考文献转Bib": "参考文献をBibに変換する",
|
1732 |
+
"由于提问含不合规内容被Azure过滤": "質問が規則に違反しているため���Azureによってフィルタリングされました",
|
1733 |
+
"读取优先级": "優先度を読み取る",
|
1734 |
+
"格式如org-xxxxxxxxxxxxxxxxxxxxxxxx": "形式はorg-xxxxxxxxxxxxxxxxxxxxxxxxのようです",
|
1735 |
+
"辅助gpt生成代码": "GPTのコード生成を補助する",
|
1736 |
+
"读取音频文件": "音声ファイルを読み取る",
|
1737 |
+
"输入arxivID": "arxivIDを入力する",
|
1738 |
+
"转化PDF编译是否成功": "PDFのコンパイルが成功したかどうかを変換する",
|
1739 |
+
"Call ChatGLMFT fail 不能正常加载ChatGLMFT的参数": "ChatGLMFTのパラメータを正常にロードできませんでした",
|
1740 |
+
"创建AcsClient实例": "AcsClientのインスタンスを作成する",
|
1741 |
+
"将 chatglm 直接对齐到 chatglm2": "chatglmをchatglm2に直接整列させる",
|
1742 |
+
"要求": "要求",
|
1743 |
+
"子任务失败时的重试次数": "サブタスクが失敗した場合のリトライ回数",
|
1744 |
+
"请求子进程": "サブプロセスを要求する",
|
1745 |
+
"按钮是否可见": "ボタンが表示可能かどうか",
|
1746 |
+
"将 \\include 命令转换为 \\input 命令": "\\includeコマンドを\\inputコマンドに変換する",
|
1747 |
+
"用户填3": "ユーザーが3を入力する",
|
1748 |
+
"后面是英文逗号": "後ろに英語のカンマがあります",
|
1749 |
+
"吸收iffalse注释": "iffalseコメントを吸収する",
|
1750 |
+
"请稍候": "お待ちください",
|
1751 |
+
"摘要生成后的文档路径": "要約生成後のドキュメントのパス",
|
1752 |
+
"主程序即将开始": "メインプログラムがすぐに開始されます",
|
1753 |
+
"处理历史信息": "履歴情報の処理",
|
1754 |
+
"根据给定的切割时长将音频文件切割成多个片段": "指定された分割時間に基づいてオーディオファイルを複数のセグメントに分割する",
|
1755 |
+
"解决部分词汇翻译不准确的问题": "一部の用語の翻訳の不正確さを解決する",
|
1756 |
+
"即将退出": "すぐに終了します",
|
1757 |
+
"用于给一小段代码上代理": "一部のコードにプロキシを適用するために使用されます",
|
1758 |
+
"提取文件扩展名": "ファイルの拡張子を抽出する",
|
1759 |
+
"目前支持的格式": "現在サポートされている形式",
|
1760 |
+
"第一次调用": "最初の呼び出し",
|
1761 |
+
"异步方法": "非同期メソッド",
|
1762 |
+
"P.S. 顺便把Latex的注释去除": "P.S. LaTeXのコメントを削除する",
|
1763 |
+
"构建完成": "ビルドが完了しました",
|
1764 |
+
"缺少": "不足しています",
|
1765 |
+
"建议暂时不要使用": "一時的に使用しないことをお勧めします",
|
1766 |
+
"对比PDF编译是否成功": "PDFのコンパイルが成功したかどうかを比較する",
|
1767 |
+
"填入azure openai api的密钥": "Azure OpenAI APIのキーを入力してください",
|
1768 |
+
"功能尚不稳定": "機能はまだ安定していません",
|
1769 |
+
"则跳过GPT请求环节": "GPTリクエストのスキップ",
|
1770 |
+
"即不处理之前的对话历史": "以前の対話履歴を処理しない",
|
1771 |
+
"非Openai官方接口返回了错误": "非公式のOpenAI APIがエラーを返しました",
|
1772 |
+
"其他类型文献转化效果未知": "他のタイプの文献の変換効果は不明です",
|
1773 |
+
"给出一些判定模板文档的词作为扣分项": "テンプレートドキュメントの単語を減点項目として提供する",
|
1774 |
+
"找 API_ORG 设置项": "API_ORGの設定項目を検索します",
|
1775 |
+
"调用函数": "関数を呼び出します",
|
1776 |
+
"需要手动安装新增的依赖库": "新しい依存ライブラリを手動でインストールする必要があります",
|
1777 |
+
"或者使用此插件继续上传更多文件": "または、このプラグインを使用してさらにファイルをアップロードします",
|
1778 |
+
"640个字节为一组": "640バイトごとにグループ化します",
|
1779 |
+
"逆转出错的段落": "エラーのあるパラグラフを逆転させます",
|
1780 |
+
"对话助手函数插件": "対話アシスタント関数プラグイン",
|
1781 |
+
"前者是API2D的结束条件": "前者はAPI2Dの終了条件です",
|
1782 |
+
"终端": "ターミナル",
|
1783 |
+
"仅调试": "デバッグのみ",
|
1784 |
+
"论文": "論文",
|
1785 |
+
"想象一个穿着者": "着用者を想像してください",
|
1786 |
+
"音频内容是": "音声の内容は",
|
1787 |
+
"如果需要使用AZURE 详情请见额外文档 docs\\use_azure.md": "AZUREを使用する必要がある場合は、詳細については別のドキュメント docs\\use_azure.md を参照してください",
|
1788 |
+
"请先将.doc文档转换为.docx文档": ".docドキュメントを.docxドキュメントに変換してください",
|
1789 |
+
"请查看终端的输出或耐心等待": "ターミナルの出力を確認するか、お待ちください",
|
1790 |
+
"初始化音频采集线程": "オーディオキャプチャスレッドを初期化します",
|
1791 |
+
"用该压缩包+ConversationHistoryArchive进行反馈": "この圧縮ファイル+ConversationHistoryArchiveを使用してフィードバックします",
|
1792 |
+
"阿里云实时语音识��� 配置难度较高 仅建议高手用户使用 参考 https": "阿里云リアルタイム音声認識の設定は難しいため、上級ユーザーのみに推奨されます 参考 https",
|
1793 |
+
"多线程翻译开始": "マルチスレッド翻訳が開始されました",
|
1794 |
+
"只有GenerateImage和生成图像相关": "GenerateImageと関連する画像の生成のみ",
|
1795 |
+
"代理数据解析失败": "プロキシデータの解析に失敗しました",
|
1796 |
+
"建议使用英文单词": "英単語の使用をお勧めします",
|
1797 |
+
"功能描述": "機能の説明",
|
1798 |
+
"读 docs\\use_azure.md": "ドキュメントを読む",
|
1799 |
+
"将消耗较长时间下载中文向量化模型": "中国語のベクトル化モデルをダウンロードするのに時間がかかります",
|
1800 |
+
"表示频道ID": "チャネルIDを表示する",
|
1801 |
+
"未知指令": "不明なコマンド",
|
1802 |
+
"包含documentclass关键字": "documentclassキーワードを含む",
|
1803 |
+
"中读取数据构建知识库": "データを読み取って知識ベースを構築する",
|
1804 |
+
"远程云服务器部署": "リモートクラウドサーバーにデプロイする",
|
1805 |
+
"输入部分太自由": "入力が自由すぎる",
|
1806 |
+
"读取pdf文件": "PDFファイルを読み込む",
|
1807 |
+
"将两个PDF拼接": "2つのPDFを結合する",
|
1808 |
+
"默认值为1000": "デフォルト値は1000です",
|
1809 |
+
"写出文件": "ファイルに書き出す",
|
1810 |
+
"生成的视频文件路径": "生成されたビデオファイルのパス",
|
1811 |
+
"Arixv论文精细翻译": "Arixv論文の詳細な翻訳",
|
1812 |
+
"用latex编译为PDF对修正处做高亮": "LaTeXでコンパイルしてPDFに修正をハイライトする",
|
1813 |
+
"点击“停止”键可终止程序": "「停止」ボタンをクリックしてプログラムを終了できます",
|
1814 |
+
"否则将导致每个人的Claude问询历史互相渗透": "さもないと、各人のClaudeの問い合わせ履歴が相互に侵入します",
|
1815 |
+
"音频文件名": "オーディオファイル名",
|
1816 |
+
"的参数!": "のパラメータ!",
|
1817 |
+
"对话历史": "対話履歴",
|
1818 |
+
"当下一次用户提交时": "次のユーザーの提出時に",
|
1819 |
+
"数学GenerateAnimation": "数学GenerateAnimation",
|
1820 |
+
"如果要使用Claude": "Claudeを使用する場合は",
|
1821 |
+
"请向下翻": "下にスクロールしてください",
|
1822 |
+
"报告已经添加到右侧“文件上传区”": "報告は右側の「ファイルアップロードエリア」に追加されました",
|
1823 |
+
"删除整行的空注释": "空のコメントを含む行を削除する",
|
1824 |
+
"建议直接在API_KEY处填写": "API_KEYの場所に直接入力することをお勧めします",
|
1825 |
+
"暗色模式 / 亮色模式": "ダークモード/ライトモード",
|
1826 |
+
"做一些外观色彩上的调整": "外観の色調整を行う",
|
1827 |
+
"请切换至“KnowledgeBaseQuestionAnswer”插件进行知识库访问": "ナレッジベースのアクセスには「KnowledgeBaseQuestionAnswer」プラグインに切り替えてください",
|
1828 |
+
"它*必须*被包含在AVAIL_LLM_MODELS列表中": "それはAVAIL_LLM_MODELSリストに含まれている必要があります",
|
1829 |
+
"并设置参数": "パラメータを設定する",
|
1830 |
+
"待处理的word文档路径": "処理待ちのWord文書のパス",
|
1831 |
+
"调用缓存": "キャッシュを呼び出す",
|
1832 |
+
"片段": "フラグメント",
|
1833 |
+
"否则结束循环": "それ以外の場合はループを終了する",
|
1834 |
+
"请对下面的音频片段做概述": "以下のオーディオフラグメントについて概要を作成してください",
|
1835 |
+
"高危设置! 常规情况下不要修改! 通过修改此设置": "高リスクの設定!通常は変更しないでください!この設定を変更することで",
|
1836 |
+
"插件锁定中": "プラグインがロックされています",
|
1837 |
+
"开始": "開始",
|
1838 |
+
"但请查收结果": "結果を確認してください",
|
1839 |
+
"刷新Gradio前端界面": "Gradioフロントエンドインターフェースをリフレッシュする",
|
1840 |
+
"批量SummarizeAudioVideo": "オーディオビデオを一括要約する",
|
1841 |
+
"一个单实例装饰器": "単一のインスタンスデコレータ",
|
1842 |
+
"Claude响应异常": "Claudeの応答が異常です",
|
1843 |
+
"但内部用stream的方法避免中途网线被掐": "ただし、途中でネットワーク接続が切断されることを避けるために、内部ではストリームを使用しています",
|
1844 |
+
"检查USE_PROXY": "USE_PROXYを確認する",
|
1845 |
+
"永远给定None": "常にNoneを指定する",
|
1846 |
+
"报告如何远程获取": "報告のリモート取得方法",
|
1847 |
+
"您可以到Github Issue区": "GithubのIssueエリアにアクセスできます",
|
1848 |
+
"如果只询问1个大语言模型": "1つの大規模言語モデルにのみ質問する場合",
|
1849 |
+
"为了防止大语言模型的意外谬误产生扩散影响": "大規模言語モデルの誤った結果が広がるのを防ぐために",
|
1850 |
+
"编译BibTex": "BibTexの���ンパイル",
|
1851 |
+
"⭐多线程方法": "マルチスレッドの方法",
|
1852 |
+
"推荐http": "httpをおすすめします",
|
1853 |
+
"如果要使用": "使用する場合",
|
1854 |
+
"的单词": "の単語",
|
1855 |
+
"如果本地使用不建议加这个": "ローカルで使用する場合はお勧めしません",
|
1856 |
+
"避免线程阻塞": "スレッドのブロックを回避する",
|
1857 |
+
"吸收title与作者以上的部分": "タイトルと著者以上の部分を吸収する",
|
1858 |
+
"作者": "著者",
|
1859 |
+
"5刀": "5ドル",
|
1860 |
+
"ChatGLMFT响应异常": "ChatGLMFTの応答異常",
|
1861 |
+
"才能继续下面的步骤": "次の手順に進むために",
|
1862 |
+
"对这个人外貌、身处的环境、内心世界、过去经历进行描写": "この人の外見、環境、内面世界、過去の経験について描写する",
|
1863 |
+
"找不到微调模型检查点": "ファインチューニングモデルのチェックポイントが見つかりません",
|
1864 |
+
"请仔细鉴别并以原文为准": "注意深く確認し、元のテキストを参照してください",
|
1865 |
+
"计算文件总时长和切割点": "ファイルの総時間とカットポイントを計算する",
|
1866 |
+
"我将为您查找相关壁纸": "関連する壁紙を検索します",
|
1867 |
+
"此插件Windows支持最佳": "このプラグインはWindowsに最適です",
|
1868 |
+
"请输入关键词": "キーワードを入力してください",
|
1869 |
+
"以下所有配置也都支持利用环境变量覆写": "以下のすべての設定は環境変数を使用して上書きすることもサポートしています",
|
1870 |
+
"尝试第": "第#",
|
1871 |
+
"开始生成动画": "アニメーションの生成を開始します",
|
1872 |
+
"免费": "無料",
|
1873 |
+
"我好!": "私は元気です!",
|
1874 |
+
"str类型": "strタイプ",
|
1875 |
+
"生成数学动画": "数学アニメーションの生成",
|
1876 |
+
"GPT结果已输出": "GPTの結果が出力されました",
|
1877 |
+
"PDF文件所在的路径": "PDFファイルのパス",
|
1878 |
+
"源码自译解": "ソースコードの自動翻訳解析",
|
1879 |
+
"格式如org-123456789abcdefghijklmno的": "org-123456789abcdefghijklmnoの形式",
|
1880 |
+
"请对这部分内容进行语法矫正": "この部分の内容に文法修正を行ってください",
|
1881 |
+
"调用whisper模型音频转文字": "whisperモデルを使用して音声をテキストに変換する",
|
1882 |
+
"编译转化后的PDF": "変換されたPDFをコンパイルする",
|
1883 |
+
"将音频解析为简体中文": "音声を簡体字中国語に解析する",
|
1884 |
+
"删除或修改歧义文件": "曖昧なファイルを削除または修正する",
|
1885 |
+
"ChatGLMFT消耗大量的内存": "ChatGLMFTは大量のメモリを消費します",
|
1886 |
+
"图像生成所用到的提示文本": "画像生成に使用されるヒントテキスト",
|
1887 |
+
"如果已经存在": "既に存在する場合",
|
1888 |
+
"以下是一篇学术论文的基础信息": "以下は学術論文の基本情報です",
|
1889 |
+
"解压失败! 需要安装pip install rarfile来解压rar文件": "解凍に失敗しました!rarファイルを解凍するにはpip install rarfileをインストールする必要があります",
|
1890 |
+
"一般是文本过长": "通常、テキストが長すぎます",
|
1891 |
+
"单线程": "シングルスレッド",
|
1892 |
+
"Linux下必须使用Docker安装": "LinuxではDockerを使用してインストールする必要があります",
|
1893 |
+
"请先上传文件素材": "まずファイル素材をアップロードしてください",
|
1894 |
+
"如果分析错误": "もし解析エラーがある場合",
|
1895 |
+
"快捷的调试函数": "便利なデバッグ関数",
|
1896 |
+
"欢迎使用 MOSS 人工智能助手!输入内容即可进行对话": "MOSS AIアシスタントをご利用いただきありがとうございます!入力内容を入力すると、対話ができます",
|
1897 |
+
"json等": "jsonなど",
|
1898 |
+
"--读取参数": "--パラメータの読み込み",
|
1899 |
+
"⭐单线程方法": "⭐シングルスレッドメソッド",
|
1900 |
+
"请用一句话概括这些文件的整体功能": "これらのファイルの全体的な機能を一文で要約してください",
|
1901 |
+
"用于灵活调整复杂功能的各种参数": "複雑な機能を柔軟に調整するためのさまざまなパラメータ",
|
1902 |
+
"默认 False": "デフォルトはFalseです",
|
1903 |
+
"生成中文PDF": "中国語のPDFを生成する",
|
1904 |
+
"正在处理": "処理中",
|
1905 |
+
"需要被切割的音频文件名": "分割する必要のある音声ファイル名",
|
1906 |
+
"根据文本使用GPT模型生成相应的图像": "テキストに基づいてGPTモデルを使用して対応する画像を生成する",
|
1907 |
+
"可选": "オプション",
|
1908 |
+
"Aliyun音频服务异常": "Aliyunオーディオサービスの異常",
|
1909 |
+
"尝试下载": "ダウンロードを試みる",
|
1910 |
+
"需Latex": "LaTeXが必要です",
|
1911 |
+
"拆分过长的Markdown文件": "長すぎるMarkdownファイルを分割する",
|
1912 |
+
"当前支持的格式包括": "現在サポートされている形式には",
|
1913 |
+
"=================================== 工具函数 ===============================================": "=================================== ユーティリティ関数 ===============================================",
|
1914 |
+
"所有音频都总结完成了吗": "すべてのオーディオが要約されましたか",
|
1915 |
+
"没有设置ANTHROPIC_API_KEY": "ANTHROPIC_API_KEYが設定されていません",
|
1916 |
+
"详见项目主README.md": "詳細はプロジェクトのメインREADME.mdを参照してください",
|
1917 |
+
"使用": "使用する",
|
1918 |
+
"P.S. 其他可用的模型还包括": "P.S. 其他可用的模型还包括",
|
1919 |
+
"保证括号正确": "保证括号正确",
|
1920 |
+
"或代理节点": "或代理节点",
|
1921 |
+
"整理结果为压缩包": "整理结果为压缩包",
|
1922 |
+
"实时音频采集": "实时音频采集",
|
1923 |
+
"获取回复": "获取回复",
|
1924 |
+
"插件可读取“输入区”文本/路径作为参数": "插件可读取“输入区”文本/路径作为参数",
|
1925 |
+
"请讲话": "请讲话",
|
1926 |
+
"将文件复制一份到下载区": "将文件复制一份到下载区",
|
1927 |
+
"from crazy_functions.虚空终端 import 终端": "from crazy_functions.虚空终端 import 终端",
|
1928 |
+
"这个paper有个input命令文件名大小写错误!": "这个paper有个input命令文件名大小写错误!",
|
1929 |
+
"解除插件锁定": "解除插件锁定",
|
1930 |
+
"不能加载Claude组件": "不能加载Claude组件",
|
1931 |
+
"如果有必要": "如果有必要",
|
1932 |
+
"禁止移除或修改此警告": "禁止移除或修改此警告",
|
1933 |
+
"然后进行问答": "然后进行问答",
|
1934 |
+
"响应异常": "响应异常",
|
1935 |
+
"使用英文": "使用英文",
|
1936 |
+
"add gpt task 创建子线程请求gpt": "add gpt task 创建子线程请求gpt",
|
1937 |
+
"实际得到格式": "实际得到格式",
|
1938 |
+
"请继续分析其他源代码": "请继续分析其他源代码",
|
1939 |
+
"”的主要内容": "”的主要内容",
|
1940 |
+
"防止proxies单独起作用": "防止proxies单独起作用",
|
1941 |
+
"临时地激活代理网络": "临时地激活代理网络",
|
1942 |
+
"屏蔽空行和太短的句子": "屏蔽空行和太短的句子",
|
1943 |
+
"把某个路径下所有文件压缩": "把某个路径下所有文件压缩",
|
1944 |
+
"您需要首先调用构建知识库": "您需要首先调用构建知识库",
|
1945 |
+
"翻译-": "翻译-",
|
1946 |
+
"Newbing 请求失败": "Newbing 请求失败",
|
1947 |
+
"次编译": "次编译",
|
1948 |
+
"后缀": "后缀",
|
1949 |
+
"文本碎片重组为完整的tex片段": "文本碎片重组为完整的tex片段",
|
1950 |
+
"待注入的知识库名称id": "待注入的知识库名称id",
|
1951 |
+
"消耗时间的函数": "消耗时间的函数",
|
1952 |
+
"You are associated with a deactivated account. OpenAI以账户失效为由": "You are associated with a deactivated account. OpenAI以账户失效为由",
|
1953 |
+
"成功啦": "成功啦",
|
1954 |
+
"音频文件的路径": "音频文件的路径",
|
1955 |
+
"英文Latex项目全文纠错": "英文Latex项目全文纠错",
|
1956 |
+
"将子线程的gpt结果写入chatbot": "将子线程的gpt结果写入chatbot",
|
1957 |
+
"开始最终总结": "开始最终总结",
|
1958 |
+
"调用": "调用",
|
1959 |
+
"正在锁定插件": "正在锁定插件",
|
1960 |
+
"记住当前的label": "记住当前的label",
|
1961 |
+
"根据自然语言执行插件命令": "根据自然语言执行插件命令",
|
1962 |
+
"response中会携带traceback报错信息": "response中会携带traceback报错信息",
|
1963 |
+
"避免多用户干扰": "避免多用户干扰",
|
1964 |
+
"顺利完成": "顺利完成",
|
1965 |
+
"详情见https": "详情见https",
|
1966 |
+
"清空label": "ラベルをクリアする",
|
1967 |
+
"这需要一段时间计算": "これには時間がかかります",
|
1968 |
+
"找不到": "見つかりません",
|
1969 |
+
"消耗大量的内存": "大量のメモリを消費する",
|
1970 |
+
"安装方法https": "インストール方法https",
|
1971 |
+
"为发送请求做准备": "リクエストの準備をする",
|
1972 |
+
"第1次尝试": "1回目の試み",
|
1973 |
+
"检查结果": "結果をチェックする",
|
1974 |
+
"精细切分latex文件": "LaTeXファイルを細かく分割する",
|
1975 |
+
"api2d等请求源": "api2dなどのリクエストソース",
|
1976 |
+
"填入你亲手写的部署名": "あなたが手書きしたデプロイ名を入力してください",
|
1977 |
+
"给出指令": "指示を与える",
|
1978 |
+
"请问什么是质子": "プロトンとは何ですか",
|
1979 |
+
"请直接去该路径下取回翻译结果": "直接そのパスに移動して翻訳結果を取得してください",
|
1980 |
+
"等待Claude回复的片段": "Claudeの返信を待っているフラグメント",
|
1981 |
+
"Latex没有安装": "LaTeXがインストールされていません",
|
1982 |
+
"文档越长耗时越长": "ドキュメントが長いほど時間がかかります",
|
1983 |
+
"没有阿里云语音识别APPKEY和TOKEN": "阿里雲の音声認識のAPPKEYとTOKENがありません",
|
1984 |
+
"分析结果": "結果を分析する",
|
1985 |
+
"请立即终止程序": "プログラムを即座に終了してください",
|
1986 |
+
"正在尝试自动安装": "自動インストールを試みています",
|
1987 |
+
"请直接提交即可": "直接提出してください",
|
1988 |
+
"将指定目录下的PDF文件从英文翻译成中文": "指定されたディレクトリ内のPDFファイルを英語から中国語に翻訳する",
|
1989 |
+
"请查收结果": "結果を確認してください",
|
1990 |
+
"上下布局": "上下布局",
|
1991 |
+
"此处可以输入解析提示": "此处可以输入解析提示",
|
1992 |
+
"前面是中文逗号": "前面是中文逗号",
|
1993 |
+
"的依赖": "的依赖",
|
1994 |
+
"材料如下": "材料如下",
|
1995 |
+
"欢迎加REAME中的QQ联系开发者": "欢迎加REAME中的QQ联系开发者",
|
1996 |
+
"开始下载": "開始ダウンロード",
|
1997 |
+
"100字以内": "100文字以内",
|
1998 |
+
"创建request": "リクエストの作成",
|
1999 |
+
"创建存储切割音频的文件夹": "切り取られた音声を保存するフォルダの作成",
|
2000 |
+
"⭐主进程执行": "⭐メインプロセスの実行",
|
2001 |
+
"音频解析结果": "音声解析結果",
|
2002 |
+
"Your account is not active. OpenAI以账户失效为由": "アカウントがアクティブではありません。OpenAIはアカウントの無効化を理由にしています",
|
2003 |
+
"虽然PDF生成失败了": "PDFの生成に失敗しました",
|
2004 |
+
"如果这里报错": "ここでエラーが発生した場合",
|
2005 |
+
"前面是中文冒号": "前面は中国語のコロンです",
|
2006 |
+
"SummarizeAudioVideo内容": "SummarizeAudioVideoの内容",
|
2007 |
+
"openai的官方KEY需要伴随组织编码": "openaiの公式KEYは組織コードと一緒に必要です",
|
2008 |
+
"是本次输入": "これは今回の入力です",
|
2009 |
+
"色彩主体": "色彩の主体",
|
2010 |
+
"Markdown翻译": "Markdownの翻訳",
|
2011 |
+
"会被加在你的输入之后": "あなたの入力の後に追加されます",
|
2012 |
+
"失败啦": "失敗しました",
|
2013 |
+
"每个切割音频片段的时长": "各切り取り音声の長さ",
|
2014 |
+
"拆分过长的latex片段": "原始文本",
|
2015 |
+
"待提取的知识库名称id": "原始文本",
|
2016 |
+
"在这里放一些网上搜集的demo": "原始文本",
|
2017 |
+
"环境变量配置格式见docker-compose.yml": "原始文本",
|
2018 |
+
"Claude组件初始化成功": "原始文本",
|
2019 |
+
"尚未加载": "原始文本",
|
2020 |
+
"等待Claude响应": "原始文本",
|
2021 |
+
"重组": "原始文本",
|
2022 |
+
"将文件添加到chatbot cookie中": "原始文本",
|
2023 |
+
"回答完问题后": "原始文本",
|
2024 |
+
"将根据报错信息修正tex源文件并重试": "原始文本",
|
2025 |
+
"是否在触发时清除历史": "原始文本",
|
2026 |
+
"尝试执行Latex指令失败": "原始文本",
|
2027 |
+
"默认 True": "原始文本",
|
2028 |
+
"文本碎片重组为完整的tex文件": "原始文本",
|
2029 |
+
"注意事项": "原始文本",
|
2030 |
+
"您接下来不能再使用其他插件了": "原始文本",
|
2031 |
+
"属性": "原始文本",
|
2032 |
+
"正在编译PDF文档": "原始文本",
|
2033 |
+
"提取视频中的音频": "原始文本",
|
2034 |
+
"正在同时咨询ChatGPT和ChatGLM……": "原始文本",
|
2035 |
+
"Chuanhu-Small-and-Beautiful主题": "原始文本",
|
2036 |
+
"版权归原文作者所有": "原始文本",
|
2037 |
+
"如果程序停顿5分钟以上": "原始文本",
|
2038 |
+
"请输入要翻译成哪种语言": "日本語",
|
2039 |
+
"以秒为单位": "秒単位で",
|
2040 |
+
"请以以下方式load模型!!!": "以下の方法でモデルをロードしてください!!!",
|
2041 |
+
"使用时": "使用時",
|
2042 |
+
"对这个人外貌、身处的环境、内心世界、人设进行描写": "この人の外見、環境、内面世界、キャラクターを描写する",
|
2043 |
+
"例如翻译、解释代码、润色等等": "例えば翻訳、コードの説明、修正など",
|
2044 |
+
"多线程Demo": "マルチスレッドデモ",
|
2045 |
+
"不能正常加载": "正常にロードできません",
|
2046 |
+
"还原部分原文": "一部の元のテキストを復元する",
|
2047 |
+
"可以将自身的状态存储到cookie中": "自身の状態をcookieに保存することができます",
|
2048 |
+
"释放线程锁": "スレッドロックを解放する",
|
2049 |
+
"当前知识库内的有效文件": "現在のナレッジベース内の有効なファイル",
|
2050 |
+
"也是可读的": "読み取り可能です",
|
2051 |
+
"等待ChatGLMFT响应中": "ChatGLMFTの応答を待っています",
|
2052 |
+
"输入 stop 以终止对话": "stopを入力して対話を終了します",
|
2053 |
+
"对整个Latex项目进行纠错": "全体のLatexプロジェクトを修正する",
|
2054 |
+
"报错信息": "エラーメッセージ",
|
2055 |
+
"下载pdf文件未成功": "PDFファイルのダウンロードに失敗しました",
|
2056 |
+
"正在加载Claude组件": "Claudeコンポーネントを読み込んでいます",
|
2057 |
+
"格式": "フォーマット",
|
2058 |
+
"Claude响应缓慢": "Claudeの応答が遅い",
|
2059 |
+
"该选项即将被弃用": "このオプションはまもなく廃止されます",
|
2060 |
+
"正常状态": "正常な状態",
|
2061 |
+
"中文Bing版": "中国語Bing版",
|
2062 |
+
"代理网络配置": "プロキシネットワークの設定",
|
2063 |
+
"Openai 限制免费用户每分钟20次请求": "Openaiは無料ユーザーに対して1分間に20回のリクエスト制限を設けています",
|
2064 |
+
"gpt���的": "gptで書かれた",
|
2065 |
+
"向已打开的频道发送一条文本消息": "既に開いているチャンネルにテキストメッセージを送信する",
|
2066 |
+
"缺少ChatGLMFT的依赖": "ChatGLMFTの依存関係が不足しています",
|
2067 |
+
"注意目前不能多人同时调用Claude接口": "現在、複数の人が同時にClaudeインターフェースを呼び出すことはできません",
|
2068 |
+
"或者不在环境变量PATH中": "または環境変数PATHに存在しません",
|
2069 |
+
"提问吧! 但注意": "質問してください!ただし注意してください",
|
2070 |
+
"因此选择GenerateImage函数": "したがって、GenerateImage関数を選択します",
|
2071 |
+
"无法找到一个主Tex文件": "メインのTexファイルが見つかりません",
|
2072 |
+
"转化PDF编译已经成功": "PDF変換コンパイルが成功しました",
|
2073 |
+
"因为在同一个频道里存在多人使用时历史消息渗透问题": "同じチャンネルで複数の人が使用する場合、過去のメッセージが漏洩する問題があります",
|
2074 |
+
"SlackClient类用于与Slack API进行交互": "SlackClientクラスはSlack APIとのインタラクションに使用されます",
|
2075 |
+
"如果存在调试缓存文件": "デバッグキャッシュファイルが存在する場合",
|
2076 |
+
"举例": "例を挙げる",
|
2077 |
+
"无需填写": "記入する必要はありません",
|
2078 |
+
"配置教程&视频教程": "設定チュートリアル&ビデオチュートリアル",
|
2079 |
+
"最后一步处理": "最後のステップの処理",
|
2080 |
+
"定位主Latex文件": "メインのLatexファイルを特定する",
|
2081 |
+
"暂不提交": "一時的に提出しない",
|
2082 |
+
"由于最为关键的转化PDF编译失败": "最も重要なPDF変換コンパイルが失敗したため",
|
2083 |
+
"用第二人称": "第二人称を使用する",
|
2084 |
+
"例如 RoPlZrM88DnAFkZK": "例えば RoPlZrM88DnAFkZK",
|
2085 |
+
"没有设置ANTHROPIC_API_KEY选项": "ANTHROPIC_API_KEYオプションが設定されていません",
|
2086 |
+
"找不到任何.tex文件": "テキストの翻訳",
|
2087 |
+
"请您不要删除或修改这行警告": "テキストの翻訳",
|
2088 |
+
"只有第二步成功": "テキストの翻訳",
|
2089 |
+
"调用Claude时": "テキストの翻訳",
|
2090 |
+
"输入 clear 以清空对话历史": "テキストの翻訳",
|
2091 |
+
"= 2 通过一些Latex模板中常见": "テキストの翻訳",
|
2092 |
+
"没给定指令": "テキストの翻訳",
|
2093 |
+
"还原原文": "テキストの翻訳",
|
2094 |
+
"自定义API KEY格式": "テキストの翻訳",
|
2095 |
+
"防止丢失最后一条消息": "テキストの翻訳",
|
2096 |
+
"方法": "テキストの翻訳",
|
2097 |
+
"压缩包": "テキストの翻訳",
|
2098 |
+
"对各个llm模型进行单元测试": "テキストの翻訳",
|
2099 |
+
"导入依赖失败": "テキストの翻訳",
|
2100 |
+
"详情信息见requirements.txt": "テキストの翻訳",
|
2101 |
+
"翻译内容可靠性无保障": "テキストの翻訳",
|
2102 |
+
"刷新页面即可以退出KnowledgeBaseQuestionAnswer模式": "テキストの翻訳",
|
2103 |
+
"上传本地文件/压缩包供函数插件调用": "テキストの翻訳",
|
2104 |
+
"循环监听已打开频道的消息": "テキストの翻訳",
|
2105 |
+
"一个包含所有切割音频片段文件路径的列表": "テキストの翻訳",
|
2106 |
+
"检测到arxiv文档连接": "テキストの翻訳",
|
2107 |
+
"P.S. 顺便把CTEX塞进去以支持中文": "テキストの翻訳",
|
2108 |
+
"后面是英文冒号": "テキストの翻訳",
|
2109 |
+
"上传文件自动修正路径": "テキストの翻訳",
|
2110 |
+
"实现消息发送、接收等功能": "メッセージの送受信などの機能を実現する",
|
2111 |
+
"改变输入参数的顺序与结构": "入力パラメータの順序と構造を変更する",
|
2112 |
+
"正在精细切分latex文件": "LaTeXファイルを細かく分割しています",
|
2113 |
+
"读取文件": "ファイルを読み込んでいます"
|
2114 |
}
|
docs/translate_std.json
ADDED
@@ -0,0 +1,87 @@
1 |
+
{
|
2 |
+
"解析JupyterNotebook": "ParsingJupyterNotebook",
|
3 |
+
"Latex翻译中文并重新编译PDF": "TranslateChineseToEnglishInLatexAndRecompilePDF",
|
4 |
+
"联网的ChatGPT_bing版": "OnlineChatGPT_BingEdition",
|
5 |
+
"理解PDF文档内容标准文件输入": "UnderstandPdfDocumentContentStandardFileInput",
|
6 |
+
"Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
|
7 |
+
"下载arxiv论文并翻译摘要": "DownloadArxivPaperAndTranslateAbstract",
|
8 |
+
"Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
|
9 |
+
"批量翻译PDF文档_多线程": "BatchTranslatePDFDocuments_MultiThreaded",
|
10 |
+
"下载arxiv论文翻译摘要": "DownloadArxivPaperTranslateAbstract",
|
11 |
+
"解析一个Python项目": "ParsePythonProject",
|
12 |
+
"解析一个Golang项目": "ParseGolangProject",
|
13 |
+
"代码重写为全英文_多线程": "RewriteCodeToEnglish_MultiThreaded",
|
14 |
+
"解析一个CSharp项目": "ParsingCSharpProject",
|
15 |
+
"删除所有本地对话历史记录": "DeleteAllLocalConversationHistoryRecords",
|
16 |
+
"批量Markdown翻译": "BatchTranslateMarkdown",
|
17 |
+
"连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
|
18 |
+
"Langchain知识库": "LangchainKnowledgeBase",
|
19 |
+
"Latex输出PDF结果": "OutputPDFFromLatex",
|
20 |
+
"把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline",
|
21 |
+
"Latex精细分解与转化": "DecomposeAndConvertLatex",
|
22 |
+
"解析一个C项目的头文件": "ParseCProjectHeaderFiles",
|
23 |
+
"Markdown英译中": "TranslateMarkdownFromEnglishToChinese",
|
24 |
+
"Markdown中译英": "MarkdownChineseToEnglish",
|
25 |
+
"数学动画生成manim": "MathematicalAnimationGenerationManim",
|
26 |
+
"chatglm微调工具": "ChatGLMFineTuningTool",
|
27 |
+
"解析一个Rust项目": "ParseRustProject",
|
28 |
+
"解析一个Java项目": "ParseJavaProject",
|
29 |
+
"联网的ChatGPT": "ChatGPTConnectedToNetwork",
|
30 |
+
"解析任意code项目": "ParseAnyCodeProject",
|
31 |
+
"合并小写开头的段落块": "MergeLowercaseStartingParagraphBlocks",
|
32 |
+
"Latex英文润色": "EnglishProofreadingForLatex",
|
33 |
+
"Latex全文润色": "FullTextProofreadingForLatex",
|
34 |
+
"询问多个大语言模型": "InquiryMultipleLargeLanguageModels",
|
35 |
+
"解析一个Lua项目": "ParsingLuaProject",
|
36 |
+
"解析ipynb文件": "ParsingIpynbFiles",
|
37 |
+
"批量总结PDF文档": "BatchSummarizePDFDocuments",
|
38 |
+
"批量翻译PDF文档": "BatchTranslatePDFDocuments",
|
39 |
+
"理解PDF文档内容": "UnderstandPdfDocumentContent",
|
40 |
+
"Latex中文润色": "LatexChineseProofreading",
|
41 |
+
"Latex英文纠错": "LatexEnglishCorrection",
|
42 |
+
"Latex全文翻译": "LatexFullTextTranslation",
|
43 |
+
"同时问询_指定模型": "InquireSimultaneously_SpecifiedModel",
|
44 |
+
"批量生成函数注释": "BatchGenerateFunctionComments",
|
45 |
+
"解析一个前端项目": "ParseFrontendProject",
|
46 |
+
"高阶功能模板函数": "HighOrderFunctionTemplateFunctions",
|
47 |
+
"高级功能函数模板": "AdvancedFunctionTemplate",
|
48 |
+
"总结word文档": "SummarizingWordDocuments",
|
49 |
+
"载入对话历史存档": "LoadConversationHistoryArchive",
|
50 |
+
"Latex中译英": "LatexChineseToEnglish",
|
51 |
+
"Latex英译中": "LatexEnglishToChinese",
|
52 |
+
"连接网络回答问题": "ConnectToNetworkToAnswerQuestions",
|
53 |
+
"交互功能模板函数": "InteractiveFunctionTemplateFunction",
|
54 |
+
"交互功能函数模板": "InteractiveFunctionFunctionTemplate",
|
55 |
+
"sprint亮靛": "SprintIndigo",
|
56 |
+
"print亮黄": "PrintBrightYellow",
|
57 |
+
"print亮绿": "PrintBrightGreen",
|
58 |
+
"print亮红": "PrintBrightRed",
|
59 |
+
"解析项目源代码": "ParseProjectSourceCode",
|
60 |
+
"解析一个C项目": "ParseCProject",
|
61 |
+
"全项目切换英文": "SwitchToEnglishForTheWholeProject",
|
62 |
+
"谷歌检索小助手": "GoogleSearchAssistant",
|
63 |
+
"读取知识库作答": "ReadKnowledgeArchiveAnswerQuestions",
|
64 |
+
"print亮蓝": "PrintBrightBlue",
|
65 |
+
"微调数据集生成": "FineTuneDatasetGeneration",
|
66 |
+
"清理多余的空行": "CleanUpExcessBlankLines",
|
67 |
+
"编译Latex": "CompileLatex",
|
68 |
+
"解析Paper": "ParsePaper",
|
69 |
+
"ipynb解释": "IpynbExplanation",
|
70 |
+
"读文章写摘要": "ReadArticleWriteSummary",
|
71 |
+
"生成函数注释": "GenerateFunctionComments",
|
72 |
+
"解析项目本身": "ParseProjectItself",
|
73 |
+
"对话历史存档": "ConversationHistoryArchive",
|
74 |
+
"专业词汇声明": "ProfessionalTerminologyDeclaration",
|
75 |
+
"解析docx": "ParseDocx",
|
76 |
+
"解析源代码新": "ParsingSourceCodeNew",
|
77 |
+
"总结音视频": "SummaryAudioVideo",
|
78 |
+
"知识库问答": "UpdateKnowledgeArchive",
|
79 |
+
"多文件润色": "ProofreadMultipleFiles",
|
80 |
+
"多文件翻译": "TranslateMultipleFiles",
|
81 |
+
"解析PDF": "ParsePDF",
|
82 |
+
"同时问询": "SimultaneousInquiry",
|
83 |
+
"图片生成": "ImageGeneration",
|
84 |
+
"动画生成": "AnimationGeneration",
|
85 |
+
"语音助手": "VoiceAssistant",
|
86 |
+
"启动微调": "StartFineTuning"
|
87 |
+
}
|
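(translate_std.json 保存的是“中文函数名 → 英文标识符”的固定映射,供 multi_language.py 在翻译源码时做整词替换。下面是一个示意性片段(非项目源码),演示这类映射的典型用法——与 multi_language.py 的做法一致,先按 key 长度降序排序,避免较短的 key 先被替换而破坏较长的 key:)
```
# 示意性片段(非项目源码):读取 translate_std.json 并对源码字符串做标识符替换
import json

with open('docs/translate_std.json', 'r', encoding='utf-8') as f:
    mapping = json.load(f)

# 与 multi_language.py 相同的策略:长 key 优先
mapping = dict(sorted(mapping.items(), key=lambda x: -len(x[0])))

def translate_identifiers(source: str) -> str:
    for zh, en in mapping.items():
        source = source.replace(zh, en)
    return source

print(translate_identifiers("解析一个C项目的头文件"))  # -> ParseCProjectHeaderFiles
```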
docs/translate_traditionalchinese.json
CHANGED
@@ -150,26 +150,7 @@
|
|
150 |
"使用中文回答我的问题": "使用中文回答我的問題",
|
151 |
"备份一个文件": "備份一個文件",
|
152 |
"未知": "未知",
|
153 |
-
"
|
154 |
-
"**输入参数说明**": "#",
|
155 |
-
"如果这裡拋出異常": "#",
|
156 |
-
"多線程操作已經開始": "#",
|
157 |
-
"備份和下載": "#",
|
158 |
-
"新版本可用": "#",
|
159 |
-
"將要忽略匹配的文件後綴": "#",
|
160 |
-
"可調節線程池的大小避免openai的流量限制錯誤": "#",
|
161 |
-
"使用Unsplash API": "#",
|
162 |
-
"ChatGPT綜合": "#",
|
163 |
-
"從摘要中提取高價值信息": "#",
|
164 |
-
"借助此參數": "#",
|
165 |
-
"知乎": "#",
|
166 |
-
"其他錯誤": "#",
|
167 |
-
"退出": "#",
|
168 |
-
"對話歷史寫入": "#",
|
169 |
-
"問詢記錄": "#",
|
170 |
-
"依次訪問網頁": "#",
|
171 |
-
"NewBing響應異常": "#",
|
172 |
-
"jittorllms尚未加載": "#",
|
173 |
"等待NewBing响应": "等待NewBing回應",
|
174 |
"找不到任何CSharp文件": "找不到任何CSharp檔案",
|
175 |
"插件demo": "插件範例",
|
@@ -300,12 +281,12 @@
|
|
300 |
"上傳本地文件可供紅色函數插件調用": "上傳本地文件供紅色函數插件調用",
|
301 |
"生成圖像": "生成圖像",
|
302 |
"追加歷史": "追加歷史",
|
303 |
-
"網絡代理狀態": "
|
304 |
"不需要再次轉化": "不需要再次轉換",
|
305 |
"帶超時倒計時": "帶有超時倒數計時",
|
306 |
"保存當前對話": "儲存目前對話",
|
307 |
"等待響應": "等待回應",
|
308 |
-
"依賴檢測通過": "
|
309 |
"如果要使用ChatGLM": "如果要使用ChatGLM",
|
310 |
"對IPynb文件進行解析": "對IPynb檔案進行解析",
|
311 |
"先切換模型到openai或api2d": "先切換模型到openai或api2d",
|
@@ -411,7 +392,7 @@
|
|
411 |
"中转网址预览": "中轉網址預覽",
|
412 |
"自动截断": "自動截斷",
|
413 |
"当無法用標點、空行分割時": "當無法用標點、空行分割時",
|
414 |
-
"意外Json結構": "
|
415 |
"需要讀取和清理文本的pdf文件路徑": "需要讀取和清理文本的pdf文件路徑",
|
416 |
"HotReload的裝飾器函數": "HotReload的裝飾器函數",
|
417 |
"chatGPT 分析報告": "chatGPT 分析報告",
|
@@ -423,7 +404,7 @@
|
|
423 |
"這個bug沒找到觸發條件": "這個bug沒找到觸發條件",
|
424 |
"喚起高級參數輸入區": "喚起高級參數輸入區",
|
425 |
"但大部分場合下並不需要修改": "但大部分場合下並不需要修改",
|
426 |
-
"盡量是完整的一個section": "
|
427 |
"如果OpenAI不響應": "如果OpenAI不響應",
|
428 |
"等文本特殊符號轉換為其基本形式來對文本進行歸一化處理": "等文本特殊符號轉換為其基本形式來對文本進行歸一化處理",
|
429 |
"你的回答必須簡單明了": "你的回答必須簡單明了",
|
@@ -517,7 +498,7 @@
|
|
517 |
"正在提取摘要並下載PDF文檔……": "正在提取摘要並下載PDF文件……",
|
518 |
"1. 對原始文本進行歸一化處理": "1. 正規化原始文本",
|
519 |
"問題": "問題",
|
520 |
-
"用於基礎的對話功能": "
|
521 |
"獲取設置": "獲取設置",
|
522 |
"如果缺少依賴": "如果缺少依賴項",
|
523 |
"第6步": "第6步",
|
@@ -1111,26 +1092,9 @@
|
|
1111 |
"清理规则包括": "清理規則包括",
|
1112 |
"新版配置": "新版配置",
|
1113 |
"如果有": "如果有",
|
1114 |
-
"
|
1115 |
-
"
|
1116 |
-
"
|
1117 |
-
"有線程鎖": "#",
|
1118 |
-
"解析整個CSharp項目": "#",
|
1119 |
-
"上下文管理器必須實現兩個方法": "#",
|
1120 |
-
"Call MOSS fail 不能正常加載MOSS的參數": "#",
|
1121 |
-
"獲取圖片URL": "#",
|
1122 |
-
"輸入部分太自由": "#",
|
1123 |
-
"Not enough point. API2D賬戶點數不足": "#",
|
1124 |
-
"網絡錯誤": "#",
|
1125 |
-
"請開始多線程操作": "#",
|
1126 |
-
"authors獲取失敗": "#",
|
1127 |
-
"、地址": "#",
|
1128 |
-
"根據以上分析": "#",
|
1129 |
-
"1、英文題目;2、中文題目翻譯;3、作者;4、arxiv公開": "#",
|
1130 |
-
"一些普通功能模塊": "#",
|
1131 |
-
"參數簡單": "#",
|
1132 |
-
"具備以下功能": "#",
|
1133 |
-
"優先級2. 獲取config_private中的配置": "#",
|
1134 |
"汇总报告如何远程获取": "如何遠程獲取匯總報告",
|
1135 |
"热更新prompt": "熱更新提示",
|
1136 |
"插件调度异常": "插件調度異常",
|
@@ -1191,26 +1155,9 @@
|
|
1191 |
"函数插件区": "函數插件區",
|
1192 |
"*** API_KEY 导入成功": "*** API_KEY 導入成功",
|
1193 |
"请对下面的程序文件做一个概述文件名是": "請對下面的程序文件做一個概述文件名是",
|
1194 |
-
"
|
1195 |
-
"
|
1196 |
-
"
|
1197 |
-
"生成帶有段落標籤的HTML代碼": "#",
|
1198 |
-
"函數熱更新是指在不停止程序運行的情況下": "#",
|
1199 |
-
"將Unsplash API中的PUT_YOUR_QUERY_HERE替換成描述該事件的一個最重要的單詞": "#",
|
1200 |
-
"沒有提供高級參數功能說明": "#",
|
1201 |
-
"條": "#",
|
1202 |
-
"請刷新界面重試": "#",
|
1203 |
-
"和openai的連接容易斷掉": "#",
|
1204 |
-
"使用 Unsplash API": "#",
|
1205 |
-
"完成情況": "#",
|
1206 |
-
"迭代上一次的結果": "#",
|
1207 |
-
"每個線程都要“餵狗”": "#",
|
1208 |
-
"最多收納多少個網頁的結果": "#",
|
1209 |
-
"日": "#",
|
1210 |
-
"第4步": "#",
|
1211 |
-
"找不到任何python文件": "#",
|
1212 |
-
"經過充分測試": "#",
|
1213 |
-
"缺少的依賴": "#",
|
1214 |
"分组+迭代处理": "分組+迭代處理",
|
1215 |
"安装Newbing的依赖": "安裝Newbing的依賴",
|
1216 |
"批": "批",
|
@@ -1511,5 +1458,821 @@
|
|
1511 |
"包括": "包括",
|
1512 |
"或者": "或者",
|
1513 |
"并执行函数的新版本": "並執行函數的新版本",
|
1514 |
-
"论文": "論文"
|
1515 |
}
|
|
|
150 |
"使用中文回答我的问题": "使用中文回答我的問題",
|
151 |
"备份一个文件": "備份一個文件",
|
152 |
"未知": "未知",
|
153 |
+
"其他錯誤": "其他錯誤",
|
154 |
"等待NewBing响应": "等待NewBing回應",
|
155 |
"找不到任何CSharp文件": "找不到任何CSharp檔案",
|
156 |
"插件demo": "插件範例",
|
|
|
281 |
"上傳本地文件可供紅色函數插件調用": "上傳本地文件供紅色函數插件調用",
|
282 |
"生成圖像": "生成圖像",
|
283 |
"追加歷史": "追加歷史",
|
284 |
+
"網絡代理狀態": "網絡代理狀態",
|
285 |
"不需要再次轉化": "不需要再次轉換",
|
286 |
"帶超時倒計時": "帶有超時倒數計時",
|
287 |
"保存當前對話": "儲存目前對話",
|
288 |
"等待響應": "等待回應",
|
289 |
+
"依賴檢測通過": "依賴檢測通過",
|
290 |
"如果要使用ChatGLM": "如果要使用ChatGLM",
|
291 |
"對IPynb文件進行解析": "對IPynb檔案進行解析",
|
292 |
"先切換模型到openai或api2d": "先切換模型到openai或api2d",
|
|
|
392 |
"中转网址预览": "中轉網址預覽",
|
393 |
"自动截断": "自動截斷",
|
394 |
"当無法用標點、空行分割時": "當無法用標點、空行分割時",
|
395 |
+
"意外Json結構": "意外Json結構",
|
396 |
"需要讀取和清理文本的pdf文件路徑": "需要讀取和清理文本的pdf文件路徑",
|
397 |
"HotReload的裝飾器函數": "HotReload的裝飾器函數",
|
398 |
"chatGPT 分析報告": "chatGPT 分析報告",
|
|
|
404 |
"這個bug沒找到觸發條件": "這個bug沒找到觸發條件",
|
405 |
"喚起高級參數輸入區": "喚起高級參數輸入區",
|
406 |
"但大部分場合下並不需要修改": "但大部分場合下並不需要修改",
|
407 |
+
"盡量是完整的一個section": "盡量選擇完整的一個章節",
|
408 |
"如果OpenAI不響應": "如果OpenAI不響應",
|
409 |
"等文本特殊符號轉換為其基本形式來對文本進行歸一化處理": "等文本特殊符號轉換為其基本形式來對文本進行歸一化處理",
|
410 |
"你的回答必須簡單明了": "你的回答必須簡單明了",
|
|
|
498 |
"正在提取摘要並下載PDF文檔……": "正在提取摘要並下載PDF文件……",
|
499 |
"1. 對原始文本進行歸一化處理": "1. 正規化原始文本",
|
500 |
"問題": "問題",
|
501 |
+
"用於基礎的對話功能": "用於基礎的對話功能",
|
502 |
"獲取設置": "獲取設置",
|
503 |
"如果缺少依賴": "如果缺少依賴項",
|
504 |
"第6步": "第6步",
|
|
|
1092 |
"清理规则包括": "清理規則包括",
|
1093 |
"新版配置": "新版配置",
|
1094 |
"如果有": "如果有",
|
1095 |
+
"Call MOSS fail 不能正常加載MOSS的參數": "Call MOSS fail 不能正常加載MOSS的參數",
|
1096 |
+
"根據以上分析": "根據以上分析",
|
1097 |
+
"一些普通功能模塊": "一些普通功能模塊",
|
1098 |
"汇总报告如何远程获取": "如何遠程獲取匯總報告",
|
1099 |
"热更新prompt": "熱更新提示",
|
1100 |
"插件调度异常": "插件調度異常",
|
|
|
1155 |
"函数插件区": "函數插件區",
|
1156 |
"*** API_KEY 导入成功": "*** API_KEY 導入成功",
|
1157 |
"请对下面的程序文件做一个概述文件名是": "請對下面的程序文件做一個概述文件名是",
|
1158 |
+
"內容太長了都會觸發token數量溢出的錯誤": "內容太長了都會觸發token數量溢出的錯誤",
|
1159 |
+
"沒有提供高級參數功能說明": "未提供高級參數功能說明",
|
1160 |
+
"和openai的連接容易斷掉": "和openai的連接容易斷掉",
|
1161 |
"分组+迭代处理": "分組+迭代處理",
|
1162 |
"安装Newbing的依赖": "安裝Newbing的依賴",
|
1163 |
"批": "批",
|
|
|
1458 |
"包括": "包括",
|
1459 |
"或者": "或者",
|
1460 |
"并执行函数的新版本": "並執行函數的新版本",
|
1461 |
+
"论文": "論文",
|
1462 |
+
"解析一个Golang项目": "ParseAGolangProject",
|
1463 |
+
"Latex英文纠错": "LatexEnglishCorrection",
|
1464 |
+
"连接bing搜索回答问题": "ConnectToBingSearchForAnswer",
|
1465 |
+
"联网的ChatGPT_bing版": "ChatGPT_BingVersionOnline",
|
1466 |
+
"总结音视频": "SummarizeAudioAndVideo",
|
1467 |
+
"动画生成": "GenerateAnimations",
|
1468 |
+
"数学动画生成manim": "GenerateMathematicalAnimationsWithManim",
|
1469 |
+
"Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
|
1470 |
+
"知识库问答": "KnowledgeBaseQA",
|
1471 |
+
"Langchain知识库": "LangchainKnowledgeBase",
|
1472 |
+
"读取知识库作答": "ReadKnowledgeBaseAndAnswerQuestions",
|
1473 |
+
"交互功能模板函数": "InteractiveFunctionTemplateFunctions",
|
1474 |
+
"交互功能函数模板": "InteractiveFunctionFunctionTemplates",
|
1475 |
+
"Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison",
|
1476 |
+
"Latex输出PDF结果": "OutputPDFFromLatex",
|
1477 |
+
"Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF",
|
1478 |
+
"语音助手": "VoiceAssistant",
|
1479 |
+
"微调数据集生成": "FineTuneDatasetGeneration",
|
1480 |
+
"chatglm微调工具": "ChatGLM_FineTuningTool",
|
1481 |
+
"启动微调": "StartFineTuning",
|
1482 |
+
"sprint亮靛": "SprintLiangDian",
|
1483 |
+
"寻找Latex主文件": "FindLatexMainFile",
|
1484 |
+
"专业词汇声明": "ProfessionalTerminologyDeclaration",
|
1485 |
+
"Latex精细分解与转化": "LatexFineDecompositionAndConversion",
|
1486 |
+
"编译Latex": "CompileLatex",
|
1487 |
+
"正在等您说完问题": "正在等您說完問題",
|
1488 |
+
"最多同时执行5个": "最多同時執行5個",
|
1489 |
+
"将文件复制一份到下载区": "將檔案複製一份到下載區",
|
1490 |
+
"您接下来不能再使用其他插件了": "您接下來不能再使用其他插件了",
|
1491 |
+
"如 绿帽子*深蓝色衬衫*黑色运动裤": "如 綠帽子*深藍色襯衫*黑色運動褲",
|
1492 |
+
"首先你在中文语境下通读整篇论文": "首先您在中文語境下通讀整篇論文",
|
1493 |
+
"根据给定的切割时长将音频文件切割成多个片段": "根據給定的切割時長將音訊檔切割成多個片段",
|
1494 |
+
"接下来两句话只显示在界面上": "接下來兩句話只顯示在介面上",
|
1495 |
+
"清空label": "清空標籤",
|
1496 |
+
"正在尝试自动安装": "正在嘗試自動安裝",
|
1497 |
+
"MOSS消耗大量的内存": "MOSS消耗大量的記憶體",
|
1498 |
+
"如果这里报错": "如果這裡報錯",
|
1499 |
+
"其他类型文献转化效果未知": "其他類型文獻轉換效果未知",
|
1500 |
+
"ChatGPT综合": "ChatGPT綜合",
|
1501 |
+
"音频文件的路径": "音訊檔案的路徑",
|
1502 |
+
"执行错误": "執行錯誤",
|
1503 |
+
"因此选择GenerateImage函数": "因此選擇GenerateImage函數",
|
1504 |
+
"从摘要中提取高价值信息": "從摘要中提取高價值資訊",
|
1505 |
+
"使用英文": "使用英文",
|
1506 |
+
"是否在提交时自动清空输入框": "是否在提交時自動清空輸入框",
|
1507 |
+
"生成数学动画": "生成數學動畫",
|
1508 |
+
"正在加载Claude组件": "正在載入Claude元件",
|
1509 |
+
"参数说明": "參數說明",
|
1510 |
+
"建议排查": "建議排查",
|
1511 |
+
"将消耗较长时间下载中文向量化模型": "將消耗較長時間下載中文向量化模型",
|
1512 |
+
"test_LangchainKnowledgeBase读取": "test_LangchainKnowledgeBase讀取",
|
1513 |
+
"安装Claude的依赖": "安裝Claude的相依性",
|
1514 |
+
"以下所有配置也都支持利用环境变量覆写": "以下所有配置也都支持利用環境變數覆寫",
|
1515 |
+
"需要被切割的音频文件名": "需要被切割的音頻文件名",
|
1516 |
+
"保存当前对话": "保存當前對話",
|
1517 |
+
"功能、贡献者": "功能、貢獻者",
|
1518 |
+
"Chuanhu-Small-and-Beautiful主题": "Chuanhu-小而美主題",
|
1519 |
+
"等待Claude响应": "等待Claude響���",
|
1520 |
+
"其他模型转化效果未知": "其他模型轉換效果未知",
|
1521 |
+
"版权归原文作者所有": "版權歸原文作者所有",
|
1522 |
+
"回答完问题后": "回答完問題後",
|
1523 |
+
"请先上传文件素材": "請先上傳文件素材",
|
1524 |
+
"上传本地文件/压缩包供函数插件调用": "上傳本地文件/壓縮包供函數插件調用",
|
1525 |
+
"P.S. 顺便把Latex的注释去除": "P.S. 順便把Latex的註釋去除",
|
1526 |
+
"您提供的api-key不满足要求": "您提供的api-key不滿足要求",
|
1527 |
+
"切割音频文件": "切割音頻文件",
|
1528 |
+
"对不同latex源文件扣分": "對不同latex源文件扣分",
|
1529 |
+
"以下是一篇学术论文的基础信息": "以下是一篇學術論文的基礎信息",
|
1530 |
+
"问题": "問題",
|
1531 |
+
"待注入的知识库名称id": "待注入的知識庫名稱id",
|
1532 |
+
"”的主要内容": "”的主要內容",
|
1533 |
+
"获取设置": "獲取設置",
|
1534 |
+
"str类型": "str類型",
|
1535 |
+
"多线程": "多線程",
|
1536 |
+
"尝试执行Latex指令失败": "嘗試執行Latex指令失敗",
|
1537 |
+
"然后再写一段英文摘要": "然後再寫一段英文摘要",
|
1538 |
+
"段音频的主要内容": "段音頻的主要內容",
|
1539 |
+
"临时地激活代理网络": "臨時地激活代理網絡",
|
1540 |
+
"网络的远程文件": "網絡的遠程文件",
|
1541 |
+
"不能正常加载ChatGLMFT的参数!": "無法正常載入ChatGLMFT的參數!",
|
1542 |
+
"正在编译PDF文档": "正在編譯PDF文件",
|
1543 |
+
"等待ChatGLMFT响应中": "等待ChatGLMFT回應中",
|
1544 |
+
"将": "將",
|
1545 |
+
"片段": "片段",
|
1546 |
+
"修复括号": "修復括號",
|
1547 |
+
"条": "條",
|
1548 |
+
"建议直接在API_KEY处填写": "建議直接在API_KEY處填寫",
|
1549 |
+
"根据需要切换prompt": "根據需要切換prompt",
|
1550 |
+
"使用": "使用",
|
1551 |
+
"请输入要翻译成哪种语言": "請輸入要翻譯成哪種語言",
|
1552 |
+
"实际得到格式": "實際得到格式",
|
1553 |
+
"例如 f37f30e0f9934c34a992f6f64f7eba4f": "例如 f37f30e0f9934c34a992f6f64f7eba4f",
|
1554 |
+
"请切换至“KnowledgeBaseQA”插件进行知识库访问": "請切換至“KnowledgeBaseQA”插件進行知識庫訪問",
|
1555 |
+
"用户填3": "用戶填3",
|
1556 |
+
"远程云服务器部署": "遠程雲服務器部署",
|
1557 |
+
"未知指令": "未知指令",
|
1558 |
+
"每个线程都要“喂狗”": "每個線程都要“喂狗”",
|
1559 |
+
"该项目的Latex主文件是": "該項目的Latex主文件是",
|
1560 |
+
"设置OpenAI密钥和模型": "設置OpenAI密鑰和模型",
|
1561 |
+
"填入你亲手写的部署名": "填入你親手寫的部署名",
|
1562 |
+
"仅调试": "僅調試",
|
1563 |
+
"依赖不足": "依賴不足",
|
1564 |
+
"右下角更换模型菜单中可切换openai": "右下角更換模型菜單中可切換openai",
|
1565 |
+
"解析整个CSharp项目": "解析整個CSharp項目",
|
1566 |
+
"唤起高级参数输入区": "喚起高級參數輸入區",
|
1567 |
+
"这个bug没找到触发条件": "這個bug沒找到觸發條件",
|
1568 |
+
"========================================= 插件主程序2 =====================================================": "========================================= 插件主程序2 =====================================================",
|
1569 |
+
"经过充分测试": "經過充分測試",
|
1570 |
+
"该文件中主要包含三个函数": "該文件中主要包含三個函數",
|
1571 |
+
"您可以到Github Issue区": "您可以到Github Issue區",
|
1572 |
+
"避免线程阻塞": "避免線程阻塞",
|
1573 |
+
"吸收iffalse注释": "吸收iffalse註釋",
|
1574 |
+
"from crazy_functions.虚空终端 import 终端": "from crazy_functions.虛空終端 import 終端",
|
1575 |
+
"异步方法": "異步方法",
|
1576 |
+
"块元提取": "塊元提取",
|
1577 |
+
"Your account is not active. OpenAI以账户失效为由": "您的帳戶未啟用。OpenAI以帳戶失效為由",
|
1578 |
+
"还原部分原文": "還原部分原文",
|
1579 |
+
"如果要使用Claude": "如果要使用Claude",
|
1580 |
+
"把文件复制过去": "把文件複製過去",
|
1581 |
+
"解压失败! 需要安装pip install rarfile来解压rar文件": "解壓失敗!需要安裝pip install rarfile來解壓rar文件",
|
1582 |
+
"正在锁定插件": "正在鎖定插件",
|
1583 |
+
"输入 clear 以清空对话历史": "輸入 clear 以清空對話歷史",
|
1584 |
+
"P.S. 但愿没人把latex模板放在里面传进来": "P.S. 但願沒人把latex模板放在裡面傳進來",
|
1585 |
+
"实时音频采集": "實時音頻採集",
|
1586 |
+
"开始最终总结": "開始最終總結",
|
1587 |
+
"拒绝服务": "拒絕服務",
|
1588 |
+
"配置教程&视频教程": "配置教程&視頻教程",
|
1589 |
+
"所有音频都总结完成了吗": "所有音頻都總結完成了嗎",
|
1590 |
+
"返回": "返回",
|
1591 |
+
"避免不小心传github被别人看到": "避免不小心傳github被別人看到",
|
1592 |
+
"否则将导致每个人的Claude问询历史互相渗透": "否則將導致每個人的Claude問詢歷史互相滲透",
|
1593 |
+
"提问吧! 但注意": "提問吧!但注意",
|
1594 |
+
"待处理的word文档路径": "待處理的word文檔路徑",
|
1595 |
+
"欢迎加REAME中的QQ联系开发者": "歡迎加REAME中的QQ聯繫開發者",
|
1596 |
+
"建议暂时不要使用": "建議暫時不要使用",
|
1597 |
+
"Latex没有安��": "Latex沒有安裝",
|
1598 |
+
"在这里放一些网上搜集的demo": "在這裡放一些網上搜集的demo",
|
1599 |
+
"实现消息发送、接收等功能": "實現消息發送、接收等功能",
|
1600 |
+
"用于与with语句一起使用": "用於與with語句一起使用",
|
1601 |
+
"解压失败! 需要安装pip install py7zr来解压7z文件": "解壓失敗! 需要安裝pip install py7zr來解壓7z文件",
|
1602 |
+
"借助此参数": "借助此參數",
|
1603 |
+
"判定为数据流的结束": "判定為數據流的結束",
|
1604 |
+
"提取文件扩展名": "提取文件擴展名",
|
1605 |
+
"GPT结果已输出": "GPT結果已輸出",
|
1606 |
+
"读取文件": "讀取文件",
|
1607 |
+
"如果OpenAI不响应": "如果OpenAI不響應",
|
1608 |
+
"输入部分太自由": "輸入部分太自由",
|
1609 |
+
"用于给一小段代码上代理": "用於給一小段代碼上代理",
|
1610 |
+
"输入 stop 以终止对话": "輸入 stop 以終止對話",
|
1611 |
+
"这个paper有个input命令文件名大小写错误!": "這個paper有個input命令文件名大小寫錯誤!",
|
1612 |
+
"等待Claude回复的片段": "等待Claude回復的片段",
|
1613 |
+
"开始": "開始",
|
1614 |
+
"将根据报错信息修正tex源文件并重试": "將根據報錯信息修正tex源文件並重試",
|
1615 |
+
"建议更换代理协议": "建議更換代理協議",
|
1616 |
+
"递归地切割PDF文件": "遞歸地切割PDF文件",
|
1617 |
+
"读 docs\\use_azure.md": "讀 docs\\use_azure.md",
|
1618 |
+
"参数": "參數",
|
1619 |
+
"屏蔽空行和太短的句子": "屏蔽空行和太短的句子",
|
1620 |
+
"分析上述回答": "分析上述回答",
|
1621 |
+
"因为在同一个频道里存在多人使用时历史消息渗透问题": "因為在同一個頻道裡存在多人使用時歷史消息滲透問題",
|
1622 |
+
"使用latexdiff生成論文轉化前後對比": "使用latexdiff生成論文轉化前後對比",
|
1623 |
+
"檢查結果": "檢查結果",
|
1624 |
+
"請在此處追加更細緻的校錯指令": "請在此處追加更細緻的校錯指令",
|
1625 |
+
"報告如何遠程獲取": "報告如何遠程獲取",
|
1626 |
+
"發現已經存在翻譯好的PDF文檔": "發現已經存在翻譯好的PDF文檔",
|
1627 |
+
"插件鎖定中": "插件鎖定中",
|
1628 |
+
"正在精細切分latex文件": "正在精細切分latex文件",
|
1629 |
+
"數學GenerateAnimations": "數學GenerateAnimations",
|
1630 |
+
"上傳文件自動修正路徑": "上傳文件自動修正路徑",
|
1631 |
+
"請檢查ALIYUN_TOKEN和ALIYUN_APPKEY是否過期": "請檢查ALIYUN_TOKEN和ALIYUN_APPKEY是否過期",
|
1632 |
+
"上傳Latex項目": "上傳LaTeX項目",
|
1633 |
+
"Aliyun音頻服務異常": "Aliyun音頻服務異常",
|
1634 |
+
"為了防止大語言模型的意外謬誤產生擴散影響": "為了防止大語言模型的意外謬誤產生擴散影響",
|
1635 |
+
"調用Claude時": "調用Claude時",
|
1636 |
+
"解除插件鎖定": "解除插件鎖定",
|
1637 |
+
"暗色模式 / 亮色模式": "暗色模式 / 亮色模式",
|
1638 |
+
"只有第二步成功": "只有第二步成功",
|
1639 |
+
"分析结果": "分析結果",
|
1640 |
+
"用第二人称": "使用第二人稱",
|
1641 |
+
"详情见https": "詳情請見https",
|
1642 |
+
"记住当前的label": "記住當前的標籤",
|
1643 |
+
"当无法用标点、空行分割时": "當無法用標點符號、空行分割時",
|
1644 |
+
"如果分析错误": "如果分析錯誤",
|
1645 |
+
"如果有必要": "如果有必要",
|
1646 |
+
"不要修改!! 高危设置!通过修改此设置": "不要修改!! 高危設置!通過修改此設置",
|
1647 |
+
"ChatGLMFT消耗大量的内存": "ChatGLMFT消耗大量的內存",
|
1648 |
+
"摘要生成后的文档路径": "摘要生成後的文件路徑",
|
1649 |
+
"对全文进行概括": "對全文進行概述",
|
1650 |
+
"LLM_MODEL是默认选中的模型": "LLM_MODEL是默認選中的模型",
|
1651 |
+
"640个字节为一组": "640個字節為一組",
|
1652 |
+
"获取关键词": "獲取關鍵詞",
|
1653 |
+
"解析为简体中文": "解析為簡體中文",
|
1654 |
+
"将 \\include 命令转换为 \\input 命令": "將 \\include 命令轉換為 \\input 命令",
|
1655 |
+
"默认值为1000": "默認值為1000",
|
1656 |
+
"手动指定语言": "手動指定語言",
|
1657 |
+
"请登录OpenAI查看详情 https": "請登錄OpenAI查看詳情 https",
|
1658 |
+
"尝试第": "嘗試第",
|
1659 |
+
"每秒采样数量": "每秒採樣數量",
|
1660 |
+
"加载失败!": "加載失敗!",
|
1661 |
+
"方法": "方法",
|
1662 |
+
"对这个人外貌、身处的环境、内心世界、过去经历进行描写": "對這個人外貌、身處的環境、內心世界、過去經歷進行描寫",
|
1663 |
+
"请先将.doc文档转换为.docx文档": "請先將.doc文檔轉換為.docx文檔",
|
1664 |
+
"定位主Latex文件": "定位主Latex文件",
|
1665 |
+
"批量SummarizeAudioAndVideo": "批量摘要音视频",
|
1666 |
+
"终端": "終端",
|
1667 |
+
"即将退出": "即將退出",
|
1668 |
+
"找不到": "找不到",
|
1669 |
+
"正在听您讲话": "正在聆聽您講話",
|
1670 |
+
"请您不要删除或修改这行警告": "請勿刪除或修改此警告",
|
1671 |
+
"没有阿里云语音识别APPKEY和TOKEN": "沒有阿里雲語音識別APPKEY和TOKEN",
|
1672 |
+
"临时地启动代理网络": "臨時啟動代理網絡",
|
1673 |
+
"请尝试把以下指令复制到高级参数区": "請將以下指令複製到高級參數區",
|
1674 |
+
"中文Bing版": "中文Bing版",
|
1675 |
+
"计算文件总时长和切割点": "計算文件總時長和切割點",
|
1676 |
+
"寻找主文件": "尋找主文件",
|
1677 |
+
"jittorllms尚未加载": "jittorllms尚未加載",
|
1678 |
+
"使用正则表达式查找半行注释": "使用正則表達式查找半行註釋",
|
1679 |
+
"文档越长耗时越长": "文檔越長耗時越長",
|
1680 |
+
"生成中文PDF": "生成中文PDF",
|
1681 |
+
"写入文件": "寫入文件",
|
1682 |
+
"第三组插件": "第三組插件",
|
1683 |
+
"开始接收chatglmft的回复": "開始接收chatglmft的回覆",
|
1684 |
+
"由于提问含不合规内容被Azure过滤": "由於提問含不合規內容被Azure過濾",
|
1685 |
+
"安装方法https": "安裝方法https",
|
1686 |
+
"是否自动处理token溢出的情况": "是否自動處理token溢出的情況",
|
1687 |
+
"如果需要使用AZURE 详情请见额外文档 docs\\use_azure.md": "如果需要使用AZURE 詳情請見額外文檔 docs\\use_azure.md",
|
1688 |
+
"将要忽略匹配的文件后缀": "將要忽略匹配的文件後綴",
|
1689 |
+
"authors获取失败": "authors獲取失敗",
|
1690 |
+
"发送到openai音频解析终端": "發送到openai音頻解析終端",
|
1691 |
+
"请开始多线程操作": "請開始多線程操作",
|
1692 |
+
"对这个人外貌、身处的环境、内心世界、人设进行描写": "對這個人外貌、身處的環境、內心世界、人設進行描寫",
|
1693 |
+
"MOSS can understand and communicate fluently in the language chosen by the user such as English and 中文. MOSS can perform any language-based tasks.": "MOSS可以流利地理解和使用用戶選擇的語言,例如英語和中文。MOSS可以執行任何基於語言的任務。",
|
1694 |
+
"work_folder = Latex預處理": "設置工作目錄為Latex預處理",
|
1695 |
+
"然後轉移到指定的另一個路徑中": "然後轉移到指定的另一個路徑中",
|
1696 |
+
"使用Newbing": "使用Newbing",
|
1697 |
+
"詳情信息見requirements.txt": "詳細信息請參閱requirements.txt",
|
1698 |
+
"開始下載": "開始下載",
|
1699 |
+
"多線程翻譯開始": "多線程翻譯開始",
|
1700 |
+
"當前大語言模型": "當前大語言模型",
|
1701 |
+
"格式如org-123456789abcdefghijklmno的": "格式如org-123456789abcdefghijklmno的",
|
1702 |
+
"當下一次用戶提交時": "當下一次用戶提交時",
|
1703 |
+
"需要特殊依賴": "需要特殊依賴",
|
1704 |
+
"次編譯": "次編譯",
|
1705 |
+
"先上傳數據集": "先上傳數據集",
|
1706 |
+
"gpt寫的": "gpt寫的",
|
1707 |
+
"調用緩存": "調用緩存",
|
1708 |
+
"优先级1. 获取环境变量作为配置": "優先級1. 獲取環境變量作為配置",
|
1709 |
+
"检查config中的AVAIL_LLM_MODELS选项": "檢查config中的AVAIL_LLM_MODELS選項",
|
1710 |
+
"并且对于网络上的文件": "並且對於網絡上的文件",
|
1711 |
+
"根据文本使用GPT模型生成相应的图像": "根據文本使用GPT模型生成相應的圖像",
|
1712 |
+
"功能描述": "功能描述",
|
1713 |
+
"翻译结果": "翻譯結果",
|
1714 |
+
"需要预先pip install rarfile": "需要預先pip install rarfile",
|
1715 |
+
"等待响应": "等待響應",
|
1716 |
+
"我们剥离Introduction之后的部分": "我們剝離Introduction之後的部分",
|
1717 |
+
"函数插件-固定按钮区": "函數插件-固定按鈕區",
|
1718 |
+
"临时存储用于调试": "臨時存儲用於調試",
|
1719 |
+
"比正文字体小": "比正文字體小",
|
1720 |
+
"会直接转到该函数": "會直接轉到該函數",
|
1721 |
+
"请以以下方式load模型!!!": "請以以下方式load模型!!!",
|
1722 |
+
"请输入关键词": "請輸入關鍵詞",
|
1723 |
+
"返回找到的第一个": "返回找到的第一個",
|
1724 |
+
"高级参数输入区": "高級參數輸入區",
|
1725 |
+
"精细切分latex文件": "精細切分latex文件",
|
1726 |
+
"赋予插件锁定 锁定插件回调路径": "賦予插件鎖定 鎖定插件回調路徑",
|
1727 |
+
"尝试下载": "嘗試下載",
|
1728 |
+
"包含documentclass关键字": "包含documentclass關鍵字",
|
1729 |
+
"在一个异步线程中采集音频": "在一個異步線程中採集音頻",
|
1730 |
+
"先删除": "先刪除",
|
1731 |
+
"则跳过GPT请求环节": "則跳過GPT請求環節",
|
1732 |
+
"Not enough point. API2D账户点数不足": "Not enough point. API2D帳戶點數不足",
|
1733 |
+
"如果一句话小于7个字": "如果一句話小於7個字",
|
1734 |
+
"具备以下功能": "具備以下功能",
|
1735 |
+
"请查看终端的输出或耐心等待": "請查看終端的輸出或耐心等待",
|
1736 |
+
"对输入的word文档进行摘要生成": "對輸入的word文檔進行摘要生成",
|
1737 |
+
"只读": "只讀",
|
1738 |
+
"文本碎片重组为完整的tex文件": "文本碎片重組為完整的tex文件",
|
1739 |
+
"通过调用conversations_open方法打开一个频道": "通過調用conversations_open方法打開一個頻道",
|
1740 |
+
"对话历史文件损坏!": "對話歷史文件損壞!",
|
1741 |
+
"再失败就没办法了": "再失敗就沒辦法了",
|
1742 |
+
"原始PDF编译是否成功": "原始PDF編譯是否成功",
|
1743 |
+
"不能正常加载jittorllms的参数!": "不能正常加載jittorllms的參數!",
|
1744 |
+
"正在编译对比PDF": "正在編譯對比PDF",
|
1745 |
+
"找不到微调模型检查点": "找不到微調模型檢查點",
|
1746 |
+
"将生成的报告自动投射到文件上传区": "將生成的報告自動投射到文件上傳區",
|
1747 |
+
"请对这部分内容进行语法矫正": "請對這部分內容進行語法校正",
|
1748 |
+
"编译已经开始": "編譯已經開始",
|
1749 |
+
"需要读取和清理文本的pdf文件路径": "需要讀取和清理文本的pdf文件路徑",
|
1750 |
+
"读取文件内容到内存": "讀取文件內容到內存",
|
1751 |
+
"用&符号分隔": "用&符號分隔",
|
1752 |
+
"输入arxivID": "輸入arxivID",
|
1753 |
+
"找 API_ORG 设置项": "找API_ORG設置項",
|
1754 |
+
"分析用户提供的谷歌学术": "分析用戶提供的谷歌學術",
|
1755 |
+
"欢迎使用 MOSS 人工智能助手!输入内容即可进行对话": "歡迎使用 MOSS 人工智能助手!輸入內容即可進行對話",
|
1756 |
+
"段音频的第": "段音頻的第",
|
1757 |
+
"没有找到任何可读取文件": "沒有找到任何可讀取文件",
|
1758 |
+
"目前仅支持GPT3.5/GPT4": "目前僅支持GPT3.5/GPT4",
|
1759 |
+
"为每一位访问的用户赋予一个独一无二的uuid编码": "為每一位訪問的用戶賦予一個獨一無二的uuid編碼",
|
1760 |
+
"内含已经翻译的Tex文档": "內含已經翻譯的Tex文檔",
|
1761 |
+
"消耗时间的函数": "消耗時間的函數",
|
1762 |
+
"成功啦": "成功啦",
|
1763 |
+
"环境变量配置格式见docker-compose.yml": "環境變量配置格式見docker-compose.yml",
|
1764 |
+
"将每次对话记录写入Markdown格式的文件中": "將每次對話記錄寫入Markdown格式的文件中",
|
1765 |
+
"报告已经添加到右侧“文件上传区”": "報告已經添加到右側“文件上傳區”",
|
1766 |
+
"此处可以输入解析提示": "此處可以輸入解析提示",
|
1767 |
+
"缺少MOSS的依赖": "缺少MOSS的依賴",
|
1768 |
+
"仅在Windows系统进行了测试": "僅在Windows系統進行了測試",
|
1769 |
+
"然后重启程序": "然後重啟程序",
|
1770 |
+
"此处不修改": "此處不修改",
|
1771 |
+
"输出html调试文件": "輸出html調試文件",
|
1772 |
+
"6.25 加入判定latex模板的代码": "6.25 加入判定latex模板的代碼",
|
1773 |
+
"提取总结": "提取總結",
|
1774 |
+
"要求": "要求",
|
1775 |
+
"由于最为关键的转化PDF编译失败": "由於最為關鍵的轉化PDF編譯失敗",
|
1776 |
+
"除非您是论文的原作者": "除非您是論文的原作者",
|
1777 |
+
"输入问题后点击该插件": "輸入問題後點擊該插件",
|
1778 |
+
"该选项即将被弃用": "該選項即將被棄用",
|
1779 |
+
"再列出用户可能提出的三个问题": "再列出用戶可能提出的三個問題",
|
1780 |
+
"所有文件都总结完成了吗": "所有文件都總結完成了嗎",
|
1781 |
+
"请稍候": "請稍候",
|
1782 |
+
"向chatbot中添加简单的意外错误信息": "向chatbot中添加簡單的意外錯誤信息",
|
1783 |
+
"快捷的调试函数": "快捷的調試函數",
|
1784 |
+
"LatexEnglishCorrection+高亮修正位置": "Latex英文校正+高亮修正位置",
|
1785 |
+
"循环监听已打开频道的消息": "循環監聽已打開頻道的消息",
|
1786 |
+
"将指定目录下的PDF文件从英文翻译成中文": "將指定目錄下的PDF文件從英文翻譯成中文",
|
1787 |
+
"请对下面的音频片段做概述": "請對下面的音頻片段做概述",
|
1788 |
+
"openai的官方KEY需要伴隨组织编码": "openai的官方KEY需要伴隨組織編碼",
|
1789 |
+
"表示频道ID": "頻道ID",
|
1790 |
+
"当前支持的格式包括": "目前支援的格式包括",
|
1791 |
+
"只有GenerateImage和生成图像相关": "僅限GenerateImage和生成圖像相關",
|
1792 |
+
"删除中间文件夹": "刪除中間資料夾",
|
1793 |
+
"解除插件状态": "解除插件狀態",
|
1794 |
+
"正在预热文本向量化模组": "正在預熱文本向量化模組",
|
1795 |
+
"100字以内": "限制100字內",
|
1796 |
+
"如果缺少依赖": "如果缺少相依性",
|
1797 |
+
"寻找主tex文件": "尋找主要tex檔案",
|
1798 |
+
"gpt 多线程请求": "gpt 多線程請求",
|
1799 |
+
"已知某些代码的局部作用是": "已知某些程式碼的局部作用是",
|
1800 |
+
"--读取文件": "--讀取檔案",
|
1801 |
+
"前面是中文冒号": "前面是中文冒號",
|
1802 |
+
"*{\\scriptsize\\textbf{警告": "*{\\scriptsize\\textbf{警告",
|
1803 |
+
"OpenAI所允许的最大并行过载": "OpenAI所允許的最大並行過載",
|
1804 |
+
"请直接去该路径下取回翻译结果": "請直接前往該路徑取回翻譯結果",
|
1805 |
+
"以免输入溢出": "以免輸入溢出",
|
1806 |
+
"把某个路径下所有文件压缩": "壓縮某個路徑下的所有檔案",
|
1807 |
+
"问询记录": "詢問記錄",
|
1808 |
+
"Tex源文件缺失!": "Tex原始檔案遺失!",
|
1809 |
+
"当前参数": "目前參數",
|
1810 |
+
"处理markdown文本格式的转变": "處理markdown文本格式的轉換",
|
1811 |
+
"尝试加载": "嘗試載入",
|
1812 |
+
"请在此处给出自定义翻译命令": "請在此處提供自訂翻譯命令",
|
1813 |
+
"这需要一段时间计算": "這需要一段時間計算",
|
1814 |
+
"-构建知识库": "-建立知識庫",
|
1815 |
+
"还需要填写组织": "還需要填寫組織",
|
1816 |
+
"当前知识库内的有效文件": "當前知識庫內的有效文件",
|
1817 |
+
"第一次调用": "第一次調用",
|
1818 |
+
"从一批文件": "從一批文件",
|
1819 |
+
"json等": "json等",
|
1820 |
+
"翻译-": "翻譯-",
|
1821 |
+
"编译文献交叉引用": "編譯文獻交叉引用",
|
1822 |
+
"优先级2. 获取config_private中的配置": "優先級2. 獲取config_private中的配置",
|
1823 |
+
"可选": "可選",
|
1824 |
+
"我们": "我們",
|
1825 |
+
"编译结束": "編譯結束",
|
1826 |
+
"或代理节点": "或代理節點",
|
1827 |
+
"chatGPT 分析报告": "chatGPT 分析報告",
|
1828 |
+
"调用openai api 使用whisper-1模型": "調用openai api 使用whisper-1模型",
|
1829 |
+
"这段代码定义了一个名为TempProxy的空上下文管理器": "這段代碼定義了一個名為TempProxy的空上下文管理器",
|
1830 |
+
"生成的视频文件路径": "生成的視頻文件路徑",
|
1831 |
+
"请直接提交即可": "請直接提交即可",
|
1832 |
+
"=================================== 工具函数 ===============================================": "=================================== 工具函數 ===============================================",
|
1833 |
+
"报错信息如下. 如果是与网络相关的问题": "報錯信息如下. 如果是與網絡相關的問題",
|
1834 |
+
"python 版本建议3.9+": "python 版本建議3.9+",
|
1835 |
+
"多线程函数插件中": "多線程函數插件中",
|
1836 |
+
"对话助手函数插件": "對話助手函數插件",
|
1837 |
+
"或者重启之后再度尝试": "或者重啟之後再度嘗試",
|
1838 |
+
"拆分过长的latex片段": "拆分過長的latex片段",
|
1839 |
+
"调用whisper模型音频转文字": "調用whisper模型音頻轉文字",
|
1840 |
+
"失败啦": "失敗啦",
|
1841 |
+
"正在编译PDF": "正在編譯PDF",
|
1842 |
+
"请刷新界面重试": "請刷新界面重試",
|
1843 |
+
"模型参数": "模型參數",
|
1844 |
+
"写出文件": "寫出文件",
|
1845 |
+
"第二组插件": "第二組插件",
|
1846 |
+
"在多Tex文档中": "在多Tex文檔中",
|
1847 |
+
"有线程锁": "有線程鎖",
|
1848 |
+
"释放线程锁": "釋放線程鎖",
|
1849 |
+
"读取优先级": "讀取優先級",
|
1850 |
+
"Linux下必须使用Docker安装": "Linux下必須使用Docker安裝",
|
1851 |
+
"例如您可以将以下命令复制到下方": "例如您可以將以下命令複製到下方",
|
1852 |
+
"导入依赖失败": "導入依賴失敗",
|
1853 |
+
"给出一些判定模板文档的词作为扣分项": "給出一些判定模板文檔的詞作為扣分項",
|
1854 |
+
"等待Claude响应中": "等待Claude響應中",
|
1855 |
+
"Call ChatGLMFT fail 不能正常加载ChatGLMFT的参数": "Call ChatGLMFT fail 不能正常加載ChatGLMFT的參數",
|
1856 |
+
"但本地存储了以下历史文件": "但本地存儲了以下歷史文件",
|
1857 |
+
"如果存在调试缓存文件": "如果存在調試緩存文件",
|
1858 |
+
"如果这里抛出异常": "如果這裡拋出異常",
|
1859 |
+
"详见项目主README.md": "詳見項目主README.md",
|
1860 |
+
"作者": "作者",
|
1861 |
+
"现在您点击任意“红颜色”标识的函数插件时": "現在您點擊任意“紅顏色”標識的函數插件時",
|
1862 |
+
"上下文管理器必须实现两个方法": "上下文管理器必須實現兩個方法",
|
1863 |
+
"匹配^数字^": "匹配^數字^",
|
1864 |
+
"也是可读的": "也是可讀的",
|
1865 |
+
"将音频解析为简体中文": "將音頻解析為簡體中文",
|
1866 |
+
"依次访问网页": "依次訪問網頁",
|
1867 |
+
"P.S. 顺便把CTEX塞进去以支持中文": "P.S. 順便把CTEX塞進去以支持中文",
|
1868 |
+
"NewBing响应异常": "NewBing響應異常",
|
1869 |
+
"获取已打开频道的最新消息并返回消息列表": "獲取已打開頻道的最新消息並返回消息列表",
|
1870 |
+
"请使用Markdown": "請使用Markdown",
|
1871 |
+
"例如 RoPlZrM88DnAFkZK": "例如 RoPlZrM88DnAFkZK",
|
1872 |
+
"编译BibTex": "編譯BibTex",
|
1873 |
+
"Claude失败": "Claude失敗",
|
1874 |
+
"请更换为API_URL_REDIRECT配置": "請更換為API_URL_REDIRECT配置",
|
1875 |
+
"P.S. 其他可用的模型还包括": "P.S. 其他可用的模型還包括",
|
1876 |
+
"色彩主体": "色彩主體",
|
1877 |
+
"后面是英文逗号": "後面是英文逗號",
|
1878 |
+
"下载pdf文件未成功": "下載pdf文件未成功",
|
1879 |
+
"删除整行的空注释": "刪除整行的空注釋",
|
1880 |
+
"吸收匿名公式": "吸收匿名公式",
|
1881 |
+
"从而更全面地理解项目的整体功能": "從而更全面地理解項目的整體功能",
|
1882 |
+
"不需要再次转化": "不需要再次轉化",
|
1883 |
+
"可以将自身的状态存储到cookie中": "可以將自身的狀態存儲到cookie中",
|
1884 |
+
"1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开": "1、英文題目;2、中文題目翻譯;3、作者;4、arxiv公開",
|
1885 |
+
"GPT 学术优化": "GPT 學術優化",
|
1886 |
+
"解析整个Python项目": "解析整個Python項目",
|
1887 |
+
"吸收其他杂项": "吸收其他雜項",
|
1888 |
+
"-预热文本向量化模组": "-預熱文本向量化模組",
|
1889 |
+
"Claude组件初始化成功": "Claude組件初始化成功",
|
1890 |
+
"此处填API密钥": "此處填API密鑰",
|
1891 |
+
"请继续分析其他源代码": "請繼續分析其他源代碼",
|
1892 |
+
"质能方程式": "質能方程式",
|
1893 |
+
"功能尚不稳定": "功能尚不穩定",
|
1894 |
+
"使用教程详情见 request_llm/README.md": "使用教程詳情見 request_llm/README.md",
|
1895 |
+
"从以上搜索结果中抽取信息": "從以上搜索結果中抽取信息",
|
1896 |
+
"虽然PDF生成失败了": "雖然PDF生成���敗了",
|
1897 |
+
"找图片": "尋找圖片",
|
1898 |
+
"还原原文": "還原原文",
|
1899 |
+
"可调节线程池的大小避免openai的流量限制错误": "可調整線程池大小以避免openai流量限制錯誤",
|
1900 |
+
"正在提取摘要并下载PDF文档……": "正在提取摘要並下載PDF文件......",
|
1901 |
+
"缺少ChatGLMFT的依赖": "缺少ChatGLMFT的依賴",
|
1902 |
+
"不会实时显示在界面上": "不會即時顯示在界面上",
|
1903 |
+
"解决部分词汇翻译不准确的问题": "解決部分詞彙翻譯不準確的問題",
|
1904 |
+
"等待多线程操作": "等待多線程操作",
|
1905 |
+
"吸收title与作者以上的部分": "吸收標題與作者以上的部分",
|
1906 |
+
"如果需要使用Slack Claude": "如果需要使用Slack Claude",
|
1907 |
+
"一、论文概况": "一、論文概況",
|
1908 |
+
"默认为Chinese": "默認為中文",
|
1909 |
+
"图像生成所用到的提示文本": "圖像生成所用到的提示文本",
|
1910 |
+
"向已打开的频道发送一条文本消息": "向已打開的頻道發送一條文本消息",
|
1911 |
+
"如果某个子任务出错": "如果某個子任務出錯",
|
1912 |
+
"chatglmft 没有 sys_prompt 接口": "chatglmft沒有sys_prompt接口",
|
1913 |
+
"对比PDF编译是否成功": "對比PDF編譯是否成功",
|
1914 |
+
"免费": "免費",
|
1915 |
+
"请讲话": "請講話",
|
1916 |
+
"安装ChatGLM的依赖": "安裝ChatGLM的依賴",
|
1917 |
+
"对IPynb文件进行解析": "對IPynb文件進行解析",
|
1918 |
+
"文件路径列表": "文件路徑列表",
|
1919 |
+
"或者使用此插件继续上传更多文件": "或者使用此插件繼續上傳更多文件",
|
1920 |
+
"随机负载均衡": "隨機負載均衡",
|
1921 |
+
"!!!如果需要运行量化版本": "!!!如果需要運行量化版本",
|
1922 |
+
"注意目前不能多人同时调用Claude接口": "注意目前不能多人同時調用Claude接口",
|
1923 |
+
"文件读取完成": "文件讀取完成",
|
1924 |
+
"用于灵活调整复杂功能的各种参数": "用於靈活調整複雜功能的各種參數",
|
1925 |
+
"**函数功能**": "**函數功能**",
|
1926 |
+
"先切换模型到openai或api2d": "先切換模型到openai或api2d",
|
1927 |
+
"You are associated with a deactivated account. OpenAI以账户失效为由": "您的帳戶已停用。OpenAI以帳戶失效為由",
|
1928 |
+
"你的回答必须简单明了": "您的回答必須簡單明了",
|
1929 |
+
"是否丢弃掉 不是正文的内容": "是否丟棄掉 不是正文的內容",
|
1930 |
+
"但请查收结果": "但請查收結果",
|
1931 |
+
"Claude响应缓慢": "Claude響應緩慢",
|
1932 |
+
"需Latex": "需Latex",
|
1933 |
+
"Claude回复的片段": "Claude回復的片段",
|
1934 |
+
"如果要使用ChatGLMFT": "如果要使用ChatGLMFT",
|
1935 |
+
"它*必须*被包含在AVAIL_LLM_MODELS列表中": "它*必須*被包含在AVAIL_LLM_MODELS列表中",
|
1936 |
+
"前面是中文逗号": "前面是中文逗號",
|
1937 |
+
"需要预先pip install py7zr": "需要預先pip install py7zr",
|
1938 |
+
"将前后断行符脱离": "將前後斷行符脫離",
|
1939 |
+
"防止丢失最后一条消息": "防止丟失最後一條消息",
|
1940 |
+
"初始化插件状态": "初始化插件狀態",
|
1941 |
+
"以秒为单位": "以秒為單位",
|
1942 |
+
"中文Latex项目全文润色": "中文Latex項目全文潤色",
|
1943 |
+
"对整个Latex项目进行纠错": "對整個Latex項目進行校對",
|
1944 |
+
"NEWBING_COOKIES未填写或有格式错误": "NEWBING_COOKIES未填寫或有格式錯誤",
|
1945 |
+
"函数插件作者": "函數插件作者",
|
1946 |
+
"结束": "結束",
|
1947 |
+
"追加历史": "追加歷史",
|
1948 |
+
"您需要首先调用构建知识库": "您需要首先調用構建知識庫",
|
1949 |
+
"如果程序停顿5分钟以上": "如果程序停頓5分鐘以上",
|
1950 |
+
"ChatGLMFT响应异常": "ChatGLMFT響應異常",
|
1951 |
+
"根据当前的模型类别": "根據當前的模型類別",
|
1952 |
+
"才能继续下面的步骤": "才能繼續下面的步驟",
|
1953 |
+
"并将返回的频道ID保存在属性CHANNEL_ID中": "並將返回的頻道ID保存在屬性CHANNEL_ID中",
|
1954 |
+
"请查收结果": "請查收結果",
|
1955 |
+
"解决插件锁定时的界面显示问题": "解決插件鎖定時的界面顯示問題",
|
1956 |
+
"待提取的知识库名称id": "待提取的知識庫名稱id",
|
1957 |
+
"Claude响应异常": "Claude響應異常",
|
1958 |
+
"当前代理可用性": "當前代理可用性",
|
1959 |
+
"代理网络配置": "代理網絡配置",
|
1960 |
+
"我将为您查找相关壁纸": "我將為您查找相關壁紙",
|
1961 |
+
"没给定指令": "沒給定指令",
|
1962 |
+
"音频内容是": "音頻內容是",
|
1963 |
+
"用该压缩包+ConversationHistoryArchive进行反馈": "用該壓縮包+ConversationHistoryArchive進行反饋",
|
1964 |
+
"总结音频": "總結音頻",
|
1965 |
+
"等待用户的再次调用": "等待用戶的再次調用",
|
1966 |
+
"永远给定None": "永遠給定None",
|
1967 |
+
"论文概况": "論文概況",
|
1968 |
+
"建议使用英文单词": "建議使用英文單詞",
|
1969 |
+
"刷新Gradio前端界面": "刷新Gradio前端界面",
|
1970 |
+
"列表递归接龙": "列表遞歸接龍",
|
1971 |
+
"赋予插件状态": "賦予插件狀態",
|
1972 |
+
"构建完成": "構建完成",
|
1973 |
+
"避免多用户干扰": "避免多用戶干擾",
|
1974 |
+
"当前工作路径为": "當前工作路徑為",
|
1975 |
+
"用黑色标注转换区": "用黑色標注轉換區",
|
1976 |
+
"压缩包": "壓縮包",
|
1977 |
+
"刷新页面即可以退出KnowledgeBaseQA模式": "刷新頁面即可以退出KnowledgeBaseQA模式",
|
1978 |
+
"拆分过长的Markdown文件": "拆分過長的Markdown文件",
|
1979 |
+
"生成时间戳": "生成時間戳",
|
1980 |
+
"尚未完成全部响应": "尚未完成全部響應",
|
1981 |
+
"HotReload的装饰器函数": "HotReload的裝飾器函數",
|
1982 |
+
"请务必用 pip install -r requirements.txt 指令安装依赖": "請務必用 pip install -r requirements.txt 指令安裝依賴",
|
1983 |
+
"TGUI不支持函数插件的实现": "TGUI不支持函數插件的實現",
|
1984 |
+
"音频文件名": "音頻文件名",
|
1985 |
+
"找不到任何音频或视频文件": "找不到任何音頻或視頻文件",
|
1986 |
+
"音频解析结果": "音頻解析結果",
|
1987 |
+
"如果使用ChatGLM2微调模型": "如果使用ChatGLM2微調模型",
|
1988 |
+
"限制的3/4时": "限制的3/4時",
|
1989 |
+
"获取回复": "獲取回復",
|
1990 |
+
"对话历史写入": "對話歷史寫入",
|
1991 |
+
"记录删除注释后的文本": "記錄刪除註釋後的文本",
|
1992 |
+
"整理结果为压缩包": "整理結果為壓縮包",
|
1993 |
+
"注意事项": "注意事項",
|
1994 |
+
"请耐心等待": "請耐心等待",
|
1995 |
+
"在执行完成之后": "在執行完成之後",
|
1996 |
+
"参数简单": "參數簡單",
|
1997 |
+
"Arixv论文精细翻译": "Arixv論文精細翻譯",
|
1998 |
+
"备份和下载": "備份和下載",
|
1999 |
+
"当前报错的latex代码处于第": "當前報錯的latex代碼處於第",
|
2000 |
+
"Markdown翻译": "Markdown翻譯",
|
2001 |
+
"英文Latex项目全文纠错": "英文Latex項目全文校對",
|
2002 |
+
"获取预处理函数": "獲取預處理函數",
|
2003 |
+
"add gpt task 创建子线程请求gpt": "add gpt task 創建子線程請求gpt",
|
2004 |
+
"一个包含所有切割音频片段文件路径的列表": "一個包含所有切割音頻片段文件路徑的列表",
|
2005 |
+
"解析arxiv网址失败": "解析arxiv網址失敗",
|
2006 |
+
"PDF文件所在的路径": "PDF文件所在路徑",
|
2007 |
+
"取评分最高者返回": "取評分最高者返回",
|
2008 |
+
"此插件处于开发阶段": "此插件處於開發階段",
|
2009 |
+
"如果已经存在": "如果已經存在",
|
2010 |
+
"或者不在环境变量PATH中": "或者不在環境變量PATH中",
|
2011 |
+
"目前支持的格式": "目前支持的格式",
|
2012 |
+
"将多文件tex工程融合为一个巨型tex": "將多文件tex工程融合為一個巨型tex",
|
2013 |
+
"暂不提交": "暫不提交",
|
2014 |
+
"调用函数": "調用函數",
|
2015 |
+
"编译转化后的PDF": "編譯轉化後的PDF",
|
2016 |
+
"将代码转为动画": "將代碼轉為動畫",
|
2017 |
+
"本地Latex论文精细翻译": "本地Latex論文精細翻譯",
|
2018 |
+
"删除或修改歧义文件": "刪除或修改歧義文件",
|
2019 |
+
"其他操作系统表现未知": "其他操作系統表現未知",
|
2020 |
+
"此插件Windows支持最佳": "此插件Windows支持最佳",
|
2021 |
+
"构建知识库": "構建知識庫",
|
2022 |
+
"每个切割音频片段的时长": "每個切割音頻片段的時長",
|
2023 |
+
"用latex编译为PDF对修正处做高亮": "用latex編譯為PDF對修正處做高亮",
|
2024 |
+
"行": "行",
|
2025 |
+
"= 2 通过一些Latex模板中常见": "= 2 通過一些Latex模板中常見",
|
2026 |
+
"如参考文献、脚注、图注等": "如參考文獻、腳註、圖註等",
|
2027 |
+
"期望格式例如": "期望格式例如",
|
2028 |
+
"翻译内容可靠性无保障": "翻譯內容可靠性無保障",
|
2029 |
+
"请用一句话概括这些文件的整体功能": "請用一句話概括這些文件的整體功能",
|
2030 |
+
"段音频完成了吗": "段音頻完成了嗎",
|
2031 |
+
"填入azure openai api的密钥": "填入azure openai api的密鑰",
|
2032 |
+
"文本碎片重组为完整的tex片段": "文本碎片重組為完整的tex片段",
|
2033 |
+
"吸收在42行以內的begin-end組合": "吸收在42行以內的begin-end組合",
|
2034 |
+
"屬性": "屬性",
|
2035 |
+
"必須包含documentclass": "必須包含documentclass",
|
2036 |
+
"等待GPT響應": "等待GPT響應",
|
2037 |
+
"當前語言模型溫度設定": "當前語言模型溫度設定",
|
2038 |
+
"模型選擇是": "選擇的模型為",
|
2039 |
+
"reverse 操作必須放在最後": "reverse 操作必須放在最後",
|
2040 |
+
"將子線程的gpt結果寫入chatbot": "將子線程的gpt結果寫入chatbot",
|
2041 |
+
"默認為default": "默認為default",
|
2042 |
+
"目前對機器學習類文獻轉化效果最好": "目前對機器學習類文獻轉化效果最好",
|
2043 |
+
"主程序即將開始": "主程序即將開始",
|
2044 |
+
"點擊“停止”鍵可終止程序": "點擊“停止”鍵可終止程序",
|
2045 |
+
"正在處理": "正在處理",
|
2046 |
+
"請立即終止程序": "請立即停止程序",
|
2047 |
+
"將 chatglm 直接對齊到 chatglm2": "將 chatglm 直接對齊到 chatglm2",
|
2048 |
+
"音頻助手": "音頻助手",
|
2049 |
+
"正在構建知識庫": "正在構建知識庫",
|
2050 |
+
"請向下翻": "請向下滾動頁面",
|
2051 |
+
"後面是英文冒號": "後面是英文冒號",
|
2052 |
+
"無法找到一個主Tex文件": "無法找到一個主Tex文件",
|
2053 |
+
"使用中文总结音频“": "使用中文總結音頻",
|
2054 |
+
"该PDF由GPT-Academic开源项目调用���语言模型+Latex翻译插件一键生成": "該PDF由GPT-Academic開源項目調用大語言模型+Latex翻譯插件一鍵生成",
|
2055 |
+
"开始生成动画": "開始生成動畫",
|
2056 |
+
"完成情况": "完成情況",
|
2057 |
+
"然后进行问答": "然後進行問答",
|
2058 |
+
"为啥chatgpt会把cite里面的逗号换成中文逗号呀": "為啥chatgpt會把cite裡面的逗號換成中文逗號呀",
|
2059 |
+
"暂时不支持历史消息": "暫時不支持歷史消息",
|
2060 |
+
"项目Github地址 \\url{https": "項目Github地址 \\url{https",
|
2061 |
+
"Newbing 请求失败": "Newbing 請求失敗",
|
2062 |
+
"根据自然语言执行插件命令": "根據自然語言執行插件命令",
|
2063 |
+
"迭代上一次的结果": "迭代上一次的結果",
|
2064 |
+
"azure和api2d请求源": "azure和api2d請求源",
|
2065 |
+
"格式如org-xxxxxxxxxxxxxxxxxxxxxxxx": "格式如org-xxxxxxxxxxxxxxxxxxxxxxxx",
|
2066 |
+
"推荐http": "推薦http",
|
2067 |
+
"将要匹配的模式": "將要匹配的模式",
|
2068 |
+
"代理数据解析失败": "代理數據解析失敗",
|
2069 |
+
"创建存储切割音频的文件夹": "創建存儲切割音頻的文件夾",
|
2070 |
+
"用红色标注处保留区": "用紅色標注處保留區",
|
2071 |
+
"至少一个线程任务Token溢出而失败": "至少一個線程任務Token溢出而失敗",
|
2072 |
+
"获取Slack消息失败": "獲取Slack消息失敗",
|
2073 |
+
"极少数情况下": "極少數情況下",
|
2074 |
+
"辅助gpt生成代码": "輔助gpt生成代碼",
|
2075 |
+
"生成图像": "生成圖像",
|
2076 |
+
"最多收纳多少个网页的结果": "最多收納多少個網頁的結果",
|
2077 |
+
"获取图片URL": "獲取圖片URL",
|
2078 |
+
"正常状态": "正常狀態",
|
2079 |
+
"编译原始PDF": "編譯原始PDF",
|
2080 |
+
"SummarizeAudioAndVideo内容": "音視頻摘要內容",
|
2081 |
+
"Latex文件融合完成": "Latex文件融合完成",
|
2082 |
+
"获取线程锁": "獲取線程鎖",
|
2083 |
+
"SlackClient类用于与Slack API进行交互": "SlackClient類用於與Slack API進行交互",
|
2084 |
+
"检测到arxiv文档连接": "檢測到arxiv文檔連接",
|
2085 |
+
"--读取参数": "--讀取參數",
|
2086 |
+
"如果您是论文原作者": "如果您是論文原作者",
|
2087 |
+
"5刀": "5美元",
|
2088 |
+
"转化PDF编译是否成功": "轉換PDF編譯是否成功",
|
2089 |
+
"生成带有段落标签的HTML代码": "生成帶有段落標籤的HTML代碼",
|
2090 |
+
"目前不支持历史消息查询": "目前不支持歷史消息查詢",
|
2091 |
+
"将文件添加到chatbot cookie中": "將文件添加到chatbot cookie中",
|
2092 |
+
"多线程操作已经开始": "多線程操作已經開始",
|
2093 |
+
"请求子进程": "請求子進程",
|
2094 |
+
"将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词": "將Unsplash API中的PUT_YOUR_QUERY_HERE替換成描述該事件的一個最重要的單詞",
|
2095 |
+
"不能加载Claude组件": "不能加載Claude組件",
|
2096 |
+
"请仔细鉴别并以原文为准": "請仔細鑒別並以原文為準",
|
2097 |
+
"否则结束循环": "否則結束循環",
|
2098 |
+
"插件可读取“输入区”文本/路径作为参数": "插件可讀取“輸入區”文本/路徑作為參數",
|
2099 |
+
"网络错误": "網絡錯誤",
|
2100 |
+
"想象一个穿着者": "想像一個穿著者",
|
2101 |
+
"避免遗忘导致死锁": "避免遺忘導致死鎖",
|
2102 |
+
"保证括号正确": "保證括號正確",
|
2103 |
+
"报错信息": "錯誤信息",
|
2104 |
+
"提取视频中的音频": "提取視頻中的音頻",
|
2105 |
+
"初始化音频采集线程": "初始化音頻採集線程",
|
2106 |
+
"参考文献转Bib": "參考文獻轉Bib",
|
2107 |
+
"阿里云实时语音识别 配置难度较高 仅建议高手用户使用 参考 https": "阿里云即時語音識別配置難度較高,僅建議高手用戶使用,參考 https",
|
2108 |
+
"使用时": "使用時",
|
2109 |
+
"处理个别特殊插件的锁定状态": "處理個別特殊插件的鎖定狀態",
|
2110 |
+
"但通常不会出现在正文": "但通常不會出現在正文",
|
2111 |
+
"此函数逐渐地搜索最长的条目进行剪辑": "此函數逐漸地搜索最長的條目進行剪輯",
|
2112 |
+
"给出指令": "給出指令",
|
2113 |
+
"读取音频文件": "讀取音頻文件",
|
2114 |
+
"========================================= 插件主程序1 =====================================================": "========================================= 插件主程序1 =====================================================",
|
2115 |
+
"带超时倒计时": "帶超時倒計時",
|
2116 |
+
"禁止移除或修改此警告": "禁止移除或修改此警告",
|
2117 |
+
"ChatGLMFT尚未加载": "ChatGLMFT尚未加載",
|
2118 |
+
"双手离开鼠标键盘吧": "雙手離開鼠標鍵盤吧",
|
2119 |
+
"缺少的依赖": "缺少的依賴",
|
2120 |
+
"的单词": "的單詞",
|
2121 |
+
"中读取数据构建知识库": "中讀取數據構建知識庫",
|
2122 |
+
"函数热更新是指在不停止程序运行的情况下": "函數熱更新是指在不停止程序運行的情況下",
|
2123 |
+
"建议低于1": "建議低於1",
|
2124 |
+
"转化PDF编译已经成功": "轉換PDF編譯已經成功",
|
2125 |
+
"出问题了": "出問題了",
|
2126 |
+
"欢迎使用 MOSS 人工智能助手!": "歡迎使用 MOSS 人工智能助手!",
|
2127 |
+
"正在精细切分latex文件": "���在精細切分LaTeX文件",
|
2128 |
+
"”补上": "”補上",
|
2129 |
+
"网络代理状态": "網路代理狀態",
|
2130 |
+
"依赖检测通过": "依賴檢測通過",
|
2131 |
+
"默认为default": "預設為default",
|
2132 |
+
"Call MOSS fail 不能正常加载MOSS的参数": "呼叫MOSS失敗,無法正常載入MOSS參數",
|
2133 |
+
"音频助手": "音頻助手",
|
2134 |
+
"次编译": "次編譯",
|
2135 |
+
"其他错误": "其他錯誤",
|
2136 |
+
"属性": "屬性",
|
2137 |
+
"主程序即将开始": "主程式即將開始",
|
2138 |
+
"Aliyun音频服务异常": "Aliyun音頻服務異常",
|
2139 |
+
"response中会携带traceback报错信息": "response中會攜帶traceback錯誤信息",
|
2140 |
+
"一些普通功能模块": "一些普通功能模組",
|
2141 |
+
"和openai的连接容易断掉": "和openai的連線容易斷掉",
|
2142 |
+
"请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期": "請檢查ALIYUN_TOKEN和ALIYUN_APPKEY是否過期",
|
2143 |
+
"调用Claude时": "呼叫Claude時",
|
2144 |
+
"插件锁定中": "插件鎖定中",
|
2145 |
+
"将子线程的gpt结果写入chatbot": "將子線程的gpt結果寫入chatbot",
|
2146 |
+
"当下一次用户提交时": "當下一次使用者提交時",
|
2147 |
+
"先上传数据集": "先上傳資料集",
|
2148 |
+
"请在此处追加更细致的矫错指令": "請在此處追加更細緻的矯錯指令",
|
2149 |
+
"无法找到一个主Tex文件": "無法找到一個主Tex文件",
|
2150 |
+
"gpt写的": "gpt寫的",
|
2151 |
+
"预处理": "預處理",
|
2152 |
+
"但大部分场合下并不需要修改": "但大部分場合下並不需要修改",
|
2153 |
+
"正在构建知识库": "正在建構知識庫",
|
2154 |
+
"开始请求": "開始請求",
|
2155 |
+
"根据以上分析": "根據以上分析",
|
2156 |
+
"需要特殊依赖": "需要特殊依賴",
|
2157 |
+
"用于基础的对话功能": "用於基礎的對話功能",
|
2158 |
+
"且没有代码段": "且沒有程式碼段",
|
2159 |
+
"取决于": "取決於",
|
2160 |
+
"openai的官方KEY需要伴隨組織編碼": "請填入組織編碼",
|
2161 |
+
"等待newbing回覆的片段": "等待newbing回覆的片段",
|
2162 |
+
"调用缓存": "呼叫快取",
|
2163 |
+
"模型选择是": "模型選擇為",
|
2164 |
+
"当前大语言模型": "當前大語言模型",
|
2165 |
+
"然后转移到指定的另一个路径中": "然後轉移到指定的另一個路徑中",
|
2166 |
+
"请向下翻": "請向下滾動",
|
2167 |
+
"内容太长了都会触发token数量溢出的错误": "內容太長會觸發token數量溢出的錯誤",
|
2168 |
+
"每一块": "每一塊",
|
2169 |
+
"详情信息见requirements.txt": "詳細信息見requirements.txt",
|
2170 |
+
"没有提供高级参数功能说明": "沒有提供高級參數功能說明",
|
2171 |
+
"上传Latex项目": "上傳Latex項目",
|
2172 |
+
"请立即终止程序": "請立即終止程式",
|
2173 |
+
"解除插件锁定": "解除插件鎖定",
|
2174 |
+
"意外Json结构": "意外Json結構",
|
2175 |
+
"必须包含documentclass": "必須包含documentclass",
|
2176 |
+
"10个文件为一组": "10個文件為一組",
|
2177 |
+
"openai的官方KEY需要伴随组织编码": "openai的官方KEY需要伴隨組織編碼",
|
2178 |
+
"重置文件的创建时间": "重置文件的創建時間",
|
2179 |
+
"尽量是完整的一个section": "盡量是完整的一個section",
|
2180 |
+
"报告如何远程获取": "報告如何遠程獲取",
|
2181 |
+
"work_folder = Latex预处理": "work_folder = Latex預處理",
|
2182 |
+
"吸收在42行以内的begin-end组合": "吸收在42行以內的begin-end組合",
|
2183 |
+
"后面是英文冒号": "後面是英文冒號",
|
2184 |
+
"使用latexdiff生成论文转化前后对比": "使用latexdiff生成論文轉化前後對比",
|
2185 |
+
"首先你在英文语境下通读整篇论文": "首先你在英文語境下通讀整篇論文",
|
2186 |
+
"为了防止大语言模型的意外谬误产生扩散影响": "為了防止大語言模型的意外謬誤產生擴散影響",
|
2187 |
+
"发现已经存在翻译好的PDF文档": "發現已經存在翻譯好的PDF文檔",
|
2188 |
+
"点击“停止”键可终止程序": "點擊“停止”鍵可終止程序",
|
2189 |
+
"数学GenerateAnimations": "數學GenerateAnimations",
|
2190 |
+
"随变按钮的回调函数注册": "隨變按鈕的回調函數註冊",
|
2191 |
+
"history至少释放二分之一": "history至少釋放二分之一",
|
2192 |
+
"当前语言模型温度设定": "當前語言模型溫度設定",
|
2193 |
+
"等待GPT响应": "等待GPT響應",
|
2194 |
+
"正在处理": "正在處理",
|
2195 |
+
"多线程翻译开始": "多線程翻譯開始",
|
2196 |
+
"reverse 操作必须放在最后": "reverse 操作必須放在最後",
|
2197 |
+
"等待newbing回复的片段": "等待newbing回覆的片段",
|
2198 |
+
"开始下载": "開始下載",
|
2199 |
+
"将 chatglm 直接对齐到 chatglm2": "將 chatglm 直接對齊到 chatglm2",
|
2200 |
+
"以上材料已经被写入": "以上材料已經被寫入",
|
2201 |
+
"上传文件自动修正路径": "上傳文件自動修正路徑",
|
2202 |
+
"然后请使用Markdown格式封装": "然後請使用Markdown格式封裝",
|
2203 |
+
"目前对机器学习类文献转化效果最好": "目前對機器學習類文獻轉化效果最好",
|
2204 |
+
"检查结果": "檢查結果",
|
2205 |
+
"、地址": "地址",
|
2206 |
+
"如.md": "如.md",
|
2207 |
+
"使用Unsplash API": "使用Unsplash API",
|
2208 |
+
"**输入参数说明**": "**輸入參��說明**",
|
2209 |
+
"新版本可用": "新版本可用",
|
2210 |
+
"找不到任何python文件": "找不到任何python文件",
|
2211 |
+
"知乎": "知乎",
|
2212 |
+
"日": "日",
|
2213 |
+
"“喂狗”": "“喂狗”",
|
2214 |
+
"第4步": "第4步",
|
2215 |
+
"退出": "退出",
|
2216 |
+
"使用 Unsplash API": "使用 Unsplash API",
|
2217 |
+
"非Openai官方接口返回了错误": "非Openai官方接口返回了错误",
|
2218 |
+
"用来描述你的要求": "用來描述你的要求",
|
2219 |
+
"自定义API KEY格式": "自定義API KEY格式",
|
2220 |
+
"前缀": "前綴",
|
2221 |
+
"会被加在你的输入之前": "會被加在你的輸入之前",
|
2222 |
+
"api2d等请求源": "api2d等請求源",
|
2223 |
+
"高危设置! 常规情况下不要修改! 通过修改此设置": "高危設置!常規情況下不要修改!通過修改此設置",
|
2224 |
+
"即将编译PDF": "即將編譯PDF",
|
2225 |
+
"默认 secondary": "默認 secondary",
|
2226 |
+
"正在从github下载资源": "正在從github下載資源",
|
2227 |
+
"响应异常": "響應異常",
|
2228 |
+
"我好!": "我好!",
|
2229 |
+
"无需填写": "無需填寫",
|
2230 |
+
"缺少": "缺少",
|
2231 |
+
"请问什么是质子": "請問什麼是質子",
|
2232 |
+
"如果要使用": "如果要使用",
|
2233 |
+
"重组": "重組",
|
2234 |
+
"一个单实例装饰器": "一個單實例裝飾器",
|
2235 |
+
"的参数!": "的參數!",
|
2236 |
+
"🏃♂️🏃♂️🏃♂️ 子进程执行": "🏃♂️🏃♂️🏃♂️ 子進程執行",
|
2237 |
+
"失败时": "失敗時",
|
2238 |
+
"没有设置ANTHROPIC_API_KEY选项": "沒有設置ANTHROPIC_API_KEY選項",
|
2239 |
+
"并设置参数": "並設置參數",
|
2240 |
+
"格式": "格式",
|
2241 |
+
"按钮是否可见": "按鈕是否可見",
|
2242 |
+
"即可见": "即可見",
|
2243 |
+
"创建request": "創建request",
|
2244 |
+
"的依赖": "的依賴",
|
2245 |
+
"⭐主进程执行": "⭐主進程執行",
|
2246 |
+
"最后一步处理": "最後一步處理",
|
2247 |
+
"没有设置ANTHROPIC_API_KEY": "沒有設置ANTHROPIC_API_KEY",
|
2248 |
+
"的参数": "的參數",
|
2249 |
+
"逆转出错的段落": "逆轉出錯的段落",
|
2250 |
+
"本项目现已支持OpenAI和Azure的api-key": "本項目現已支持OpenAI和Azure的api-key",
|
2251 |
+
"前者是API2D的结束条件": "前者是API2D的結束條件",
|
2252 |
+
"增强稳健性": "增強穩健性",
|
2253 |
+
"消耗大量的内存": "消耗大量的內存",
|
2254 |
+
"您的 API_KEY 不满足任何一种已知的密钥格式": "您的API_KEY不滿足任何一種已知的密鑰格式",
|
2255 |
+
"⭐单线程方法": "⭐單線程方法",
|
2256 |
+
"是否在触发时清除历史": "是否在觸發時清除歷史",
|
2257 |
+
"⭐多线程方法": "多線程方法",
|
2258 |
+
"不能正常加载": "無法正常加載",
|
2259 |
+
"举例": "舉例",
|
2260 |
+
"即不处理之前的对话历史": "即不處理之前的對話歷史",
|
2261 |
+
"尚未加载": "尚未加載",
|
2262 |
+
"防止proxies单独起作用": "防止proxies單獨起作用",
|
2263 |
+
"默认 False": "默認 False",
|
2264 |
+
"检查USE_PROXY": "檢查USE_PROXY",
|
2265 |
+
"响应中": "響應中",
|
2266 |
+
"扭转的范围": "扭轉的範圍",
|
2267 |
+
"后缀": "後綴",
|
2268 |
+
"调用": "調用",
|
2269 |
+
"创建AcsClient实例": "創建AcsClient實例",
|
2270 |
+
"安装": "安裝",
|
2271 |
+
"会被加在你的输入之后": "會被加在你的輸入之後",
|
2272 |
+
"配合前缀可以把你的输入内容用引号圈起来": "配合前綴可以把你的輸入內容用引號圈起來",
|
2273 |
+
"例如翻译、解释代码、润色等等": "例如翻譯、解釋代碼、潤色等等",
|
2274 |
+
"后者是OPENAI的结束条件": "後者是OPENAI的結束條件",
|
2275 |
+
"标注节点的行数范围": "標註節點的行數範圍",
|
2276 |
+
"默认 True": "默認 True",
|
2277 |
+
"将两个PDF拼接": "將兩個PDF拼接"
|
2278 |
}
|
docs/use_audio.md
ADDED
@@ -0,0 +1,64 @@
|
1 |
+
# 使用音频交互功能
|
2 |
+
|
3 |
+
|
4 |
+
## 1. 安装额外依赖
|
5 |
+
```
|
6 |
+
pip install --upgrade pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
|
7 |
+
```
|
8 |
+
|
9 |
+
如果因为特色网络问题导致上述命令无法执行:
|
10 |
+
1. git clone alibabacloud-nls-python-sdk这个项目(或者直接前往Github对应网址下载压缩包).
|
11 |
+
命令行输入: `git clone https://github.com/aliyun/alibabacloud-nls-python-sdk.git`
|
12 |
+
2. 进入alibabacloud-nls-python-sdk目录,命令行输入:`python setup.py install`
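安装完成后,可以用下面这条示意性命令自检依赖是否就绪(此处假设该SDK的导入名为 `nls`,以其官方文档为准):
```
python -c "import nls, scipy, OpenSSL; print('音频依赖 OK')"
```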
|
13 |
+
|
14 |
+
|
15 |
+
## 2. 配置音频功能开关 和 阿里云APPKEY(config.py/config_private.py/环境变量)
|
16 |
+
|
17 |
+
- 注册阿里云账号
|
18 |
+
- 开通 智能语音交互 (有免费白嫖时长)
|
19 |
+
- 获取token和appkey
|
20 |
+
- 未来将逐步用其他更廉价的云服务取代阿里云
|
21 |
+
|
22 |
+
```
|
23 |
+
ENABLE_AUDIO = True
|
24 |
+
ALIYUN_TOKEN = "554a50fcd0bb476c8d07bb630e94d20c" # 此token已经失效
|
25 |
+
ALIYUN_APPKEY = "RoPlZrM88DnAFkZK" # 此appkey已经失效
|
26 |
+
```
|
27 |
+
|
28 |
+
参考 https://help.aliyun.com/document_detail/450255.html
|
29 |
+
首先需要注册阿里云开发者账号并登录,然后开通 智能语音交互 功能(可以免费获得一个token),再在 全部项目 中创建一个项目,即可获得一个appkey。
|
30 |
+
|
31 |
+
- 进阶功能
|
32 |
+
进一步填写ALIYUN_ACCESSKEY和ALIYUN_SECRET实现自动获取ALIYUN_TOKEN
|
33 |
+
```
|
34 |
+
ALIYUN_APPKEY = "RoP1ZrM84DnAFkZK"
|
35 |
+
ALIYUN_TOKEN = ""
|
36 |
+
ALIYUN_ACCESSKEY = "LTAI5q6BrFUzoRXVGUWnekh1"
|
37 |
+
ALIYUN_SECRET = "eHmI20AVWIaQZ0CiTD2bGQVsaP9i68"
|
38 |
+
```
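下面是一个示意性的 Python 片段(非本项目源码;假设已安装 aliyun-python-sdk-core,接口细节以上述阿里云官方文档为准),演示用 ALIYUN_ACCESSKEY/ALIYUN_SECRET 换取临时 ALIYUN_TOKEN 的大致流程:
```
# 示意性片段:通过 CreateToken 接口换取临时 token(非本项目源码)
import json
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("你的ALIYUN_ACCESSKEY", "你的ALIYUN_SECRET", "cn-shanghai")
request = CommonRequest()
request.set_method('POST')
request.set_domain('nls-meta.cn-shanghai.aliyuncs.com')
request.set_version('2019-02-28')
request.set_action_name('CreateToken')

response = json.loads(client.do_action_with_exception(request))
print(response['Token']['Id'], response['Token']['ExpireTime'])  # token 及其过期时间
```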
|
39 |
+
|
40 |
+
|
41 |
+
## 3.启动
|
42 |
+
|
43 |
+
启动gpt-academic `python main.py`
|
44 |
+
|
45 |
+
## 4.点击 record from microphone,授权音频采集
|
46 |
+
|
47 |
+
I 如果需要监听自己说话(不监听电脑音频),直接在浏览器中选择对应的麦即可
|
48 |
+
|
49 |
+
II 如果需要监听电脑音频(不监听自己说话),需要安装`VB-Audio VoiceMeeter`,打开声音控制面板(sound control panel)
|
50 |
+
- 1 `[把电脑的所有外放声音用VoiceMeeter截留]` 在输出区(playback)选项卡,把VoiceMeeter Input虚拟设备set as default设为默认播放设备。
|
51 |
+
- 2 `[把截留的声音释放到gpt-academic]` 打开gpt-academic主界面,授权音频采集后,在浏览器地址栏或者类似的地方会出现一个麦克风图标,打开后,按照浏览器的提示,选择VoiceMeeter虚拟麦克风。然后刷新页面,重新授权音频采集。
|
52 |
+
- 3 `[把截留的声音同时释放到耳机或音响]` 完成第一步之后,您应处于听不到电脑声音的状态。为了在截获音频的同时,避免影响正常使用,请完成这最后一步配置。在声音控制面板(sound control panel)输入区(recording)选项卡,把VoiceMeeter Output虚拟设备set as default。双击进入VoiceMeeter Output虚拟设备的设置。
|
53 |
+
- 3-1 进入VoiceMeeter Output虚拟设备子菜单,打开listen选项卡。
|
54 |
+
- 3-2 勾选Listen to this device。
|
55 |
+
- 3-3 在playback through this device下拉菜单中选择你的正常耳机或音响。
|
56 |
+
|
57 |
+
III `[把特殊软件(如腾讯会议)的外放声音用VoiceMeeter截留]` 在完成步骤II的基础上,在特殊软件(如腾讯会议)中,打开声音菜单,选择扬声器VoiceMeeter Input,选择麦克风为正常耳机麦。
|
58 |
+
|
59 |
+
IV 两种音频监听模式切换时,需要刷新页面才有效。
|
60 |
+
|
61 |
+
V 非localhost运行+非https情况下无法打开录音功能的坑:https://blog.csdn.net/weixin_39461487/article/details/109594434
|
62 |
+
|
63 |
+
## 5.点击函数插件区“实时音频采集” 或者其他音频交互功能
|
64 |
+
|
docs/use_azure.md
CHANGED
@@ -90,62 +90,29 @@
|
|
90 |
|
91 |
到现在为止,申请操作就完成了,需要记下来的有下面几个东西:
|
92 |
|
93 |
-
●
|
94 |
|
95 |
-
● 终结点
|
|
|
|
|
96 |
|
97 |
-
● 部署名(不是模型名)
|
98 |
|
99 |
# 修改 config.py
|
100 |
|
101 |
```
|
102 |
-
|
|
|
|
|
103 |
AZURE_API_KEY = "填入azure openai api的密钥"
|
104 |
AZURE_API_VERSION = "2023-05-15" # 默认使用 2023-05-15 版本,无需修改
|
105 |
-
AZURE_ENGINE = "填入部署名"
|
106 |
-
|
107 |
-
```
|
108 |
-
# API的使用
|
109 |
-
|
110 |
-
接下来就是具体怎么使用API了,还是可以参考官方文档:[快速入门 - 开始通过 Azure OpenAI 服务使用 ChatGPT 和 GPT-4 - Azure OpenAI Service | Microsoft Learn](https://learn.microsoft.com/zh-cn/azure/cognitive-services/openai/chatgpt-quickstart?pivots=programming-language-python)
|
111 |
-
|
112 |
-
和openai自己的api调用有点类似,都需要安装openai库,不同的是调用方式
|
113 |
|
114 |
```
|
115 |
-
import openai
|
116 |
-
openai.api_type = "azure" #固定格式,无需修改
|
117 |
-
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") #这里填入“终结点”
|
118 |
-
openai.api_version = "2023-05-15" #固定格式,无需修改
|
119 |
-
openai.api_key = os.getenv("AZURE_OPENAI_KEY") #这里填入“密钥1”或“密钥2”
|
120 |
-
|
121 |
-
response = openai.ChatCompletion.create(
|
122 |
-
engine="gpt-35-turbo", #这里填入的不是模型名,是部署名
|
123 |
-
messages=[
|
124 |
-
{"role": "system", "content": "You are a helpful assistant."},
|
125 |
-
{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
|
126 |
-
{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
|
127 |
-
{"role": "user", "content": "Do other Azure Cognitive Services support this too?"}
|
128 |
-
]
|
129 |
-
)
|
130 |
-
|
131 |
-
print(response)
|
132 |
-
print(response['choices'][0]['message']['content'])
|
133 |
|
134 |
-
```
|
135 |
-
|
136 |
-
需要注意的是:
|
137 |
-
|
138 |
-
1. engine那里填入的是部署名,不是模型名
|
139 |
-
|
140 |
-
2. 通过openai库获得的这个 response 和通过 request 库访问 url 获得的 response 不同,不需要 decode,已经是解析好的 json 了,直接根据键值读取即可。
|
141 |
-
|
142 |
-
更细节的使用方法,详见官方API文档。
|
143 |
|
144 |
# 关于费用
|
145 |
|
146 |
-
Azure OpenAI API 还是需要一些费用的(免费订阅只有1
|
147 |
-
|
148 |
-
![image.png](https://note.youdao.com/yws/res/18095/WEBRESOURCEeba0ab6d3127b79e143ef2d5627c0e44)
|
149 |
|
150 |
具体可以看这个网址:[Azure OpenAI 服务 - 定价 | Microsoft Azure](https://azure.microsoft.com/zh-cn/pricing/details/cognitive-services/openai-service/?cdn=disable)
|
151 |
|
|
|
90 |
|
91 |
到现在为止,申请操作就完成了,需要记下来的有下面几个东西:
|
92 |
|
93 |
+
● 密钥(对应AZURE_API_KEY,1或2都可以)
|
94 |
|
95 |
+
● 终结点 (对应AZURE_ENDPOINT)
|
96 |
+
|
97 |
+
● 部署名(对应AZURE_ENGINE,不是模型名)
|
98 |
|
|
|
99 |
|
100 |
# 修改 config.py
|
101 |
|
102 |
```
|
103 |
+
LLM_MODEL = "azure-gpt-3.5" # 指定启动时的默认模型,当然事后从下拉菜单选也ok
|
104 |
+
|
105 |
+
AZURE_ENDPOINT = "填入终结点" # 见上述图片
|
106 |
AZURE_API_KEY = "填入azure openai api的密钥"
|
107 |
AZURE_API_VERSION = "2023-05-15" # 默认使用 2023-05-15 版本,无需修改
|
108 |
+
AZURE_ENGINE = "填入部署名" # 见上述图片
|
|
109 |
|
110 |
```
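下面给出一个示意性的调用片段(假设使用 openai<1.0 版本的 Python 库;占位字符串与上方 config.py 各项一一对应,并非本项目源码),便于核对各配置项的含义:
```
import openai

openai.api_type = "azure"                       # Azure 接入的固定写法
openai.api_base = "填入终结点"                   # 对应 AZURE_ENDPOINT
openai.api_version = "2023-05-15"               # 对应 AZURE_API_VERSION
openai.api_key = "填入azure openai api的密钥"    # 对应 AZURE_API_KEY

response = openai.ChatCompletion.create(
    engine="填入部署名",                         # 对应 AZURE_ENGINE,注意是部署名而非模型名
    messages=[{"role": "user", "content": "你好"}],
)
print(response['choices'][0]['message']['content'])
```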
|
111 |
|
112 |
|
113 |
# 关于费用
|
114 |
|
115 |
+
Azure OpenAI API 还是需要一些费用的(免费订阅只有1个月有效期)
|
116 |
|
117 |
具体可以看这个网址:[Azure OpenAI 服务 - 定价 | Microsoft Azure](https://azure.microsoft.com/zh-cn/pricing/details/cognitive-services/openai-service/?cdn=disable)
|
118 |
|
multi_language.py
CHANGED
@@ -3,16 +3,18 @@
|
|
3 |
|
4 |
|
5 |
Usage:
|
6 |
-
1. modify
|
|
|
|
|
7 |
LANG = "English"
|
8 |
|
9 |
-
|
10 |
TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #."
|
11 |
|
12 |
-
|
13 |
Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes.
|
14 |
|
15 |
-
|
16 |
|
17 |
P.S.
|
18 |
|
@@ -33,7 +35,7 @@ import pickle
|
|
33 |
import time
|
34 |
|
35 |
CACHE_FOLDER = "gpt_log"
|
36 |
-
blacklist = ['multi-language', 'gpt_log', '.git', 'private_upload', 'multi_language.py']
|
37 |
|
38 |
# LANG = "TraditionalChinese"
|
39 |
# TransPrompt = f"Replace each json value `#` with translated results in Traditional Chinese, e.g., \"原始文本\":\"翻譯後文字\". Keep Json format. Do not answer #."
|
@@ -286,6 +288,7 @@ def trans_json(word_to_translate, language, special=False):
|
|
286 |
|
287 |
|
288 |
def step_1_core_key_translate():
|
|
|
289 |
def extract_chinese_characters(file_path):
|
290 |
syntax = []
|
291 |
with open(file_path, 'r', encoding='utf-8') as f:
|
@@ -301,6 +304,7 @@ def step_1_core_key_translate():
|
|
301 |
elif isinstance(node, ast.ImportFrom):
|
302 |
for n in node.names:
|
303 |
if contains_chinese(n.name): syntax.append(n.name)
|
|
|
304 |
for k in node.module.split('.'):
|
305 |
if contains_chinese(k): syntax.append(k)
|
306 |
return syntax
|
@@ -310,6 +314,7 @@ def step_1_core_key_translate():
|
|
310 |
for root, dirs, files in os.walk(directory_path):
|
311 |
if any([b in root for b in blacklist]):
|
312 |
continue
|
|
|
313 |
for file in files:
|
314 |
if file.endswith('.py'):
|
315 |
file_path = os.path.join(root, file)
|
@@ -323,15 +328,15 @@ def step_1_core_key_translate():
|
|
323 |
for d in chinese_core_keys:
|
324 |
if d not in chinese_core_keys_norepeat: chinese_core_keys_norepeat.append(d)
|
325 |
need_translate = []
|
326 |
-
cached_translation = read_map_from_json(language=
|
327 |
cached_translation_keys = list(cached_translation.keys())
|
328 |
for d in chinese_core_keys_norepeat:
|
329 |
if d not in cached_translation_keys:
|
330 |
need_translate.append(d)
|
331 |
|
332 |
-
need_translate_mapping = trans(need_translate, language=
|
333 |
-
map_to_json(need_translate_mapping, language=
|
334 |
-
cached_translation = read_map_from_json(language=
|
335 |
cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))
|
336 |
|
337 |
chinese_core_keys_norepeat_mapping = {}
|
@@ -505,6 +510,6 @@ def step_2_core_key_translate():
|
|
505 |
with open(file_path_new, 'w', encoding='utf-8') as f:
|
506 |
f.write(content)
|
507 |
os.remove(file_path)
|
508 |
-
|
509 |
step_1_core_key_translate()
|
510 |
step_2_core_key_translate()
|
|
|
|
3 |
|
4 |
|
5 |
Usage:
|
6 |
+
1. modify config.py, set your LLM_MODEL and API_KEY(s) to provide access to OPENAI (or any other LLM model provider)
|
7 |
+
|
8 |
+
2. modify LANG (below ↓)
|
9 |
LANG = "English"
|
10 |
|
11 |
+
3. modify TransPrompt (below ↓)
|
12 |
TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #."
|
13 |
|
14 |
+
4. Run `python multi_language.py`.
|
15 |
Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes.
|
16 |
|
17 |
+
5. Find the translated program in `multi-language\English\*`
|
18 |
|
19 |
P.S.
|
20 |
|
|
|
35 |
import time
|
36 |
|
37 |
CACHE_FOLDER = "gpt_log"
|
38 |
+
blacklist = ['multi-language', 'gpt_log', '.git', 'private_upload', 'multi_language.py', 'build', '.github', '.vscode', '__pycache__', 'venv']
|
39 |
|
40 |
# LANG = "TraditionalChinese"
|
41 |
# TransPrompt = f"Replace each json value `#` with translated results in Traditional Chinese, e.g., \"原始文本\":\"翻譯後文字\". Keep Json format. Do not answer #."
|
|
|
288 |
|
289 |
|
290 |
def step_1_core_key_translate():
|
291 |
+
LANG_STD = 'std'
|
292 |
def extract_chinese_characters(file_path):
|
293 |
syntax = []
|
294 |
with open(file_path, 'r', encoding='utf-8') as f:
|
|
|
304 |
elif isinstance(node, ast.ImportFrom):
|
305 |
for n in node.names:
|
306 |
if contains_chinese(n.name): syntax.append(n.name)
|
307 |
+
# if node.module is None: print(node.module)
|
308 |
for k in node.module.split('.'):
|
309 |
if contains_chinese(k): syntax.append(k)
|
310 |
return syntax
|
|
|
314 |
for root, dirs, files in os.walk(directory_path):
|
315 |
if any([b in root for b in blacklist]):
|
316 |
continue
|
317 |
+
print(files)
|
318 |
for file in files:
|
319 |
if file.endswith('.py'):
|
320 |
file_path = os.path.join(root, file)
|
|
|
328 |
for d in chinese_core_keys:
|
329 |
if d not in chinese_core_keys_norepeat: chinese_core_keys_norepeat.append(d)
|
330 |
need_translate = []
|
331 |
+
cached_translation = read_map_from_json(language=LANG_STD)
|
332 |
cached_translation_keys = list(cached_translation.keys())
|
333 |
for d in chinese_core_keys_norepeat:
|
334 |
if d not in cached_translation_keys:
|
335 |
need_translate.append(d)
|
336 |
|
337 |
+
need_translate_mapping = trans(need_translate, language=LANG_STD, special=True)
|
338 |
+
map_to_json(need_translate_mapping, language=LANG_STD)
|
339 |
+
cached_translation = read_map_from_json(language=LANG_STD)
|
340 |
cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))
|
341 |
|
342 |
chinese_core_keys_norepeat_mapping = {}
|
|
|
510 |
with open(file_path_new, 'w', encoding='utf-8') as f:
|
511 |
f.write(content)
|
512 |
os.remove(file_path)
|
|
|
513 |
step_1_core_key_translate()
|
514 |
step_2_core_key_translate()
|
515 |
+
print('Finished, checkout generated results at ./multi-language/')
|