bug fixes
Files changed:

- README.md +40 -33
- check_proxy.py +6 -9
- config.py +27 -7
- crazy_functions/Latex输出PDF结果.py +1 -1
- crazy_functions/crazy_utils.py +3 -3
- crazy_functions/latex_fns/latex_toolbox.py +2 -2
- docs/waifu_plugin/waifu-tips.js +1 -33
- docs/waifu_plugin/waifu-tips.json +2 -4
- flagged/modeling_moss.py +0 -0
- request_llms/README.md +19 -63
- request_llms/bridge_all.py +16 -0
- request_llms/bridge_deepseekcoder.py +88 -0
- request_llms/bridge_llama2.py +2 -2
- request_llms/bridge_qwen.py +2 -2
- request_llms/bridge_zhipu.py +9 -0
- request_llms/local_llm_class.py +1 -1
- tests/test_llms.py +2 -1
- themes/contrast.py +8 -4
- themes/default.py +4 -2
- themes/gradios.py +8 -7
- themes/green.py +11 -8
- version +2 -2
README.md
CHANGED

````diff
@@ -40,7 +40,7 @@ To translate this project to arbitrary language with GPT, read and run [`multi_l
 功能(⭐= 近期新增功能) | 描述
 --- | ---
-⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, [
+⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱API](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
 润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
 模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
@@ -104,36 +104,38 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 ### 安装方法I:直接运行 (Windows, Linux or MacOS)

 1. 下载项目
+
+```sh
+git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
+cd gpt_academic
+```

 2. 配置API_KEY

+在`config.py`中,配置API KEY等设置,[点击查看特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1) 。[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。

+「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解该读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中(仅复制您修改过的配置条目即可)。 」

+「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py`。 」

 3. 安装依赖
+```sh
+# (选择I: 如熟悉python, python推荐版本 3.9 ~ 3.11)备注:使用官方pip源或者阿里pip源, 临时换源方法:python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
+python -m pip install -r requirements.txt

+# (选择II: 使用Anaconda)步骤也是类似的 (https://www.bilibili.com/video/BV1rc411W7Dr):
+conda create -n gptac_venv python=3.11    # 创建anaconda环境
+conda activate gptac_venv                 # 激活anaconda环境
+python -m pip install -r requirements.txt # 这个步骤和pip安装一样的步骤
+```

 <details><summary>如果需要支持清华ChatGLM2/复旦MOSS/RWKV作为后端,请点击展开此处</summary>
 <p>

 【可选步骤】如果需要支持清华ChatGLM2/复旦MOSS作为后端,需要额外安装更多依赖(前提条件:熟悉Python + 用过Pytorch + 电脑配置够强):
+
 ```sh
 # 【可选步骤I】支持清华ChatGLM2。清华ChatGLM备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda; 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
 python -m pip install -r request_llms/requirements_chatglm.txt
@@ -155,39 +157,39 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-
 4. 运行
+```sh
+python main.py
+```

 ### 安装方法II:使用Docker

 0. 部署项目的全部能力(这个是包含cuda和latex的大型镜像。但如果您网速慢、硬盘小,则不推荐使用这个)
 [![fullcapacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)

+``` sh
+# 修改docker-compose.yml,保留方案0并删除其他方案。然后运行:
+docker-compose up
+```

 1. 仅ChatGPT+文心一言+spark等在线模型(推荐大多数人选择)
 [![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
 [![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
 [![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)

+``` sh
+# 修改docker-compose.yml,保留方案1并删除其他方案。然后运行:
+docker-compose up
+```

 P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以直接使用方案4或者方案0获取Latex功能。

 2. ChatGPT + ChatGLM2 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
 [![chatglm](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)

+``` sh
+# 修改docker-compose.yml,保留方案2并删除其他方案。然后运行:
+docker-compose up
+```

 ### 安装方法III:其他部署姿势
@@ -208,9 +210,11 @@ docker-compose up
 # Advanced Usage
 ### I:自定义新的便捷按钮(学术快捷键)
+
 任意文本编辑器打开`core_functional.py`,添加条目如下,然后重启程序。(如按钮已存在,那么前缀、后缀都支持热修改,无需重启程序即可生效。)
 例如
+
+```python
 "超级英译中": {
     # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
     "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",
@@ -219,6 +223,7 @@ docker-compose up
     "Suffix": "",
 },
 ```
+
 <div align="center">
 <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
 </div>
@@ -295,6 +300,7 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
 ### II:版本:
+
 - version 3.70(todo): 优化AutoGen插件主题并设计一系列衍生插件
 - version 3.60: 引入AutoGen作为新一代插件的基石
 - version 3.57: 支持GLM3,星火v3,文心一言v4,修复本地模型的并发BUG
@@ -315,7 +321,7 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
 - version 3.0: 对chatglm和其他小型llm的支持
 - version 2.6: 重构了插件结构,提高了交互性,加入更多插件
 - version 2.5: 自更新,解决总结大工程源代码时文本过长、token溢出的问题
-- version 2.4:
+- version 2.4: 新增PDF全文翻译功能; 新增输入区切换位置的功能
 - version 2.3: 增强多线程交互性
 - version 2.2: 函数插件支持热重载
 - version 2.1: 可折叠式布局
@@ -337,6 +343,7 @@ GPT Academic开发者QQ群:`610599535`
 1. `master` 分支: 主分支,稳定版
 2. `frontier` 分支: 开发分支,测试版
+3. 如何接入其他大模型:[接入其他大模型](request_llms/README.md)

 ### V:参考与学习
````
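The `core_functional.py` entry added in the Advanced Usage hunk above works by wrapping the user's input between a prefix and a suffix. A minimal sketch of that composition, assuming the dict shape shown in the diff (the helper name `apply_button` is hypothetical, not part of the project):

```python
# Sketch of how a core_functional.py entry is applied to user input.
# The "Prefix"/"Suffix" shape follows the README diff; the helper name
# apply_button is hypothetical.
BUTTONS = {
    "超级英译中": {
        # Prefix: prepended before your input, e.g. to state the task
        "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",
        # Suffix: appended after your input
        "Suffix": "",
    },
}

def apply_button(name: str, user_input: str) -> str:
    """Compose the final prompt from a named button entry."""
    entry = BUTTONS[name]
    return entry["Prefix"] + user_input + entry["Suffix"]
```

Because only the prefix and suffix strings change, the README's note that existing buttons support hot modification without a restart follows naturally: the prompt is recomposed on every use.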
check_proxy.py
CHANGED

````diff
@@ -5,7 +5,6 @@ def check_proxy(proxies):
     try:
         response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
         data = response.json()
-        # print(f'查询代理的地理位置,返回的结果是{data}')
         if 'country_name' in data:
             country = data['country_name']
             result = f"代理配置 {proxies_https}, 代理所在地:{country}"
@@ -47,8 +46,8 @@ def backup_and_download(current_version, remote_version):
     os.makedirs(new_version_dir)
     shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
     proxies = get_conf('proxies')
-    r = requests.get(
+    try: r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
+    except: r = requests.get('https://public.gpt-academic.top/publish/master.zip', proxies=proxies, stream=True)
     zip_file_path = backup_dir+'/master.zip'
     with open(zip_file_path, 'wb+') as f:
         f.write(r.content)
@@ -111,11 +110,10 @@ def auto_update(raise_error=False):
     try:
         from toolbox import get_conf
         import requests
-        import time
         import json
         proxies = get_conf('proxies')
-        response = requests.get(
+        try: response = requests.get("https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
+        except: response = requests.get("https://public.gpt-academic.top/publish/version", proxies=proxies, timeout=5)
         remote_json_data = json.loads(response.text)
         remote_version = remote_json_data['version']
         if remote_json_data["show_feature"]:
@@ -127,8 +125,7 @@ def auto_update(raise_error=False):
         current_version = json.loads(current_version)['version']
         if (remote_version - current_version) >= 0.01-1e-5:
             from colorful import print亮黄
-            print亮黄(
-                f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
+            print亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
             print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
             user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
             if user_instruction in ['Y', 'y']:
@@ -154,7 +151,7 @@ def auto_update(raise_error=False):
     print(msg)

 def warm_up_modules():
-    print('
+    print('正在执行一些模块的预热 ...')
     from toolbox import ProxyNetworkActivate
     from request_llms.bridge_all import model_info
     with ProxyNetworkActivate("Warmup_Modules"):
````
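Both `backup_and_download` and `auto_update` now follow the same pattern: try the GitHub URL first, and fall back to a mirror if the request raises. Stripped of the network details, the pattern is a small, testable sketch (the function name `fetch_with_fallback` is illustrative, not in the repo):

```python
# Generic primary-then-fallback call, mirroring the try/except pattern
# the diff adds to backup_and_download and auto_update.
# The name fetch_with_fallback is hypothetical.
def fetch_with_fallback(primary, fallback):
    """Call primary(); if it raises any exception, call fallback() instead."""
    try:
        return primary()
    except Exception:
        return fallback()
```

In the real code the two callables would be `requests.get` calls against the GitHub archive and the `public.gpt-academic.top` mirror.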
config.py
CHANGED

````diff
@@ -94,12 +94,12 @@ DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
 # 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
 LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview",
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-preview",
                     "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
                     "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
                     "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
-                    "chatglm3", "moss", "
+                    "chatglm3", "moss", "claude-2"]
-# P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
+# P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
 #                         "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]
@@ -278,11 +278,31 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── BAIDU_CLOUD_API_KEY
 │   └── BAIDU_CLOUD_SECRET_KEY
 │
-├── "
+├── "zhipuai" 智谱AI大模型chatglm_turbo
+│   ├── ZHIPUAI_API_KEY
+│   └── ZHIPUAI_MODEL
+│
+└── "newbing" Newbing接口不再稳定,不推荐使用
     ├── NEWBING_STYLE
     └── NEWBING_COOKIES

+本地大模型示意图
+│
+├── "chatglm3"
+├── "chatglm"
+├── "chatglm_onnx"
+├── "chatglmft"
+├── "internlm"
+├── "moss"
+├── "jittorllms_pangualpha"
+├── "jittorllms_llama"
+├── "deepseekcoder"
+├── "qwen"
+├── RWKV的支持见Wiki
+└── "llama2"
+
 用户图形界面布局依赖关系示意图
 │
 ├── CHATBOT_HEIGHT 对话窗的高度
@@ -293,7 +313,7 @@ NUM_CUSTOM_BASIC_BTN = 4
 ├── THEME 色彩主题
 ├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
 ├── ADD_WAIFU 加一个live2d装饰
+└── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性

 插件在线服务配置依赖关系示意图
@@ -305,7 +325,7 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── ALIYUN_ACCESSKEY
 │   └── ALIYUN_SECRET
 │
+└── PDF文档精准解析
+    └── GROBID_URLS

 """
````
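The README hunks in this commit restate the configuration read priority: `环境变量` (environment variable) > `config_private.py` > `config.py`. A minimal sketch of that lookup order, with plain dicts standing in for the two config modules (names here are illustrative, not the project's actual `read_single_conf` implementation):

```python
import os

# Sketch of the documented read priority:
# environment variable > config_private.py > config.py.
# The two dicts stand in for the config modules; names are illustrative.
def read_single_conf(key, config_private: dict, config: dict):
    if key in os.environ:       # highest priority: environment variable
        return os.environ[key]
    if key in config_private:   # next: private overrides in config_private.py
        return config_private[key]
    return config[key]          # default: the shipped config.py value
```

This is why the README recommends copying only modified entries into `config_private.py`: anything left out simply falls through to the `config.py` default.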
crazy_functions/Latex输出PDF结果.py
CHANGED

````diff
@@ -73,6 +73,7 @@ def move_project(project_folder, arxiv_id=None):
     # align subfolder if there is a folder wrapper
     items = glob.glob(pj(project_folder,'*'))
+    items = [item for item in items if os.path.basename(item)!='__MACOSX']
     if len(glob.glob(pj(project_folder,'*.tex'))) == 0 and len(items) == 1:
         if os.path.isdir(items[0]): project_folder = items[0]
@@ -214,7 +215,6 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
     # <-------------- we are done ------------->
     return success
-
 # ========================================= 插件主程序2 =====================================================

 @CatchException
````
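The one-line fix above exists because zip archives created on macOS often contain a `__MACOSX` metadata folder, which would defeat the "single wrapper folder" detection. The filter in isolation (function name illustrative):

```python
import os

# The diff drops the macOS zip artifact folder "__MACOSX" before deciding
# whether the extracted project sits inside a single wrapper directory.
# The helper name drop_macosx is illustrative.
def drop_macosx(items):
    return [item for item in items if os.path.basename(item) != '__MACOSX']
```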
crazy_functions/crazy_utils.py
CHANGED

````diff
@@ -1,4 +1,4 @@
-from toolbox import update_ui, get_conf, trimmed_format_exc,
+from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token
 import threading
 import os
 import logging
@@ -92,7 +92,7 @@ def request_gpt_model_in_new_thread_with_ui_alive(
     # 【选择处理】 尝试计算比例,尽可能多地保留文本
     from toolbox import get_reduce_token_percent
     p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
-    MAX_TOKEN =
+    MAX_TOKEN = get_max_token(llm_kwargs)
     EXCEED_ALLO = 512 + 512 * exceeded_cnt
     inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
     mutable[0] += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n'
@@ -224,7 +224,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     # 【选择处理】 尝试计算比例,尽可能多地保留文本
     from toolbox import get_reduce_token_percent
     p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
-    MAX_TOKEN =
+    MAX_TOKEN = get_max_token(llm_kwargs)
     EXCEED_ALLO = 512 + 512 * exceeded_cnt
     inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
     gpt_say += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n'
````
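The change replaces a hard-coded token ceiling with `get_max_token(llm_kwargs)`, so the clipping budget now follows the selected model. The budget arithmetic itself, isolated from the surrounding retry logic (the helper name is illustrative):

```python
# Sketch of the clipping budget computed in the diff: the selected model's
# max token count minus an allowance that grows by 512 per overflow retry.
# The name clipping_budget is illustrative.
def clipping_budget(max_token: int, exceeded_cnt: int) -> int:
    EXCEED_ALLO = 512 + 512 * exceeded_cnt
    return max_token - EXCEED_ALLO
```

On the first overflow (`exceeded_cnt=1`) a 4096-token model is clipped to 3072 tokens, leaving room for the reply; each further overflow shrinks the input by another 512 tokens.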
crazy_functions/latex_fns/latex_toolbox.py
CHANGED

````diff
@@ -283,10 +283,10 @@ def find_tex_file_ignore_case(fp):
     dir_name = os.path.dirname(fp)
     base_name = os.path.basename(fp)
     # 如果输入的文件路径是正确的
-    if os.path.
+    if os.path.isfile(pj(dir_name, base_name)): return pj(dir_name, base_name)
     # 如果不正确,试着加上.tex后缀试试
     if not base_name.endswith('.tex'): base_name+='.tex'
-    if os.path.
+    if os.path.isfile(pj(dir_name, base_name)): return pj(dir_name, base_name)
     # 如果还找不到,解除大小写限制,再试一次
     import glob
     for f in glob.glob(dir_name+'/*.tex'):
````
docs/waifu_plugin/waifu-tips.js
CHANGED

````diff
@@ -258,39 +258,7 @@ function loadTipsMessage(result) {
     });

     window.showWelcomeMessage = function(result) {
-        if (window.location.href == live2d_settings.homePageUrl) {
-            var now = (new Date()).getHours();
-            if (now > 23 || now <= 5) text = getRandText(result.waifu.hour_tips['t23-5']);
-            else if (now > 5 && now <= 7) text = getRandText(result.waifu.hour_tips['t5-7']);
-            else if (now > 7 && now <= 11) text = getRandText(result.waifu.hour_tips['t7-11']);
-            else if (now > 11 && now <= 14) text = getRandText(result.waifu.hour_tips['t11-14']);
-            else if (now > 14 && now <= 17) text = getRandText(result.waifu.hour_tips['t14-17']);
-            else if (now > 17 && now <= 19) text = getRandText(result.waifu.hour_tips['t17-19']);
-            else if (now > 19 && now <= 21) text = getRandText(result.waifu.hour_tips['t19-21']);
-            else if (now > 21 && now <= 23) text = getRandText(result.waifu.hour_tips['t21-23']);
-            else text = getRandText(result.waifu.hour_tips.default);
-        } else {
-            var referrer_message = result.waifu.referrer_message;
-            if (document.referrer !== '') {
-                var referrer = document.createElement('a');
-                referrer.href = document.referrer;
-                var domain = referrer.hostname.split('.')[1];
-                if (window.location.hostname == referrer.hostname)
-                    text = referrer_message.localhost[0] + document.title.split(referrer_message.localhost[2])[0] + referrer_message.localhost[1];
-                else if (domain == 'baidu')
-                    text = referrer_message.baidu[0] + referrer.search.split('&wd=')[1].split('&')[0] + referrer_message.baidu[1];
-                else if (domain == 'so')
-                    text = referrer_message.so[0] + referrer.search.split('&q=')[1].split('&')[0] + referrer_message.so[1];
-                else if (domain == 'google')
-                    text = referrer_message.google[0] + document.title.split(referrer_message.google[2])[0] + referrer_message.google[1];
-                else {
-                    $.each(result.waifu.referrer_hostname, function(i,val) {if (i==referrer.hostname) referrer.hostname = getRandText(val)});
-                    text = referrer_message.default[0] + referrer.hostname + referrer_message.default[1];
-                }
-            } else text = referrer_message.none[0] + document.title.split(referrer_message.none[2])[0] + referrer_message.none[1];
-        }
-        showMessage(text, 6000);
+        showMessage('欢迎使用GPT-Academic', 6000);
     }; if (live2d_settings.showWelcomeMessage) showWelcomeMessage(result);

     var waifu_tips = result.waifu;
````
docs/waifu_plugin/waifu-tips.json
CHANGED

````diff
@@ -83,8 +83,8 @@
         "很多强大的函数插件隐藏在下拉菜单中呢。",
         "红色的插件,使用之前需要把文件上传进去哦。",
         "想添加功能按钮吗?读读readme很容易就学会啦。",
-        "敏感或机密的信息,不可以问
-        "
+        "敏感或机密的信息,不可以问AI的哦!",
+        "LLM究竟是划时代的创新,还是扼杀创造力的毒药呢?"
     ] }
 ],
 "click": [
@@ -92,8 +92,6 @@
     "selector": ".waifu #live2d",
     "text": [
         "是…是不小心碰到了吧",
-        "萝莉控是什么呀",
-        "你看到我的小熊了吗",
         "再摸的话我可要报警了!⌇●﹏●⌇",
         "110吗,这里有个变态一直在摸我(ó﹏ò。)"
     ]
````
flagged/modeling_moss.py
ADDED

The diff for this file is too large to render. See raw diff.
request_llms/README.md
CHANGED
@@ -1,79 +1,35 @@
|
|
1 |
-
|
2 |
|
3 |
-
## ChatGLM
|
4 |
|
5 |
-
|
6 |
-
- 修改配置,在config.py中将LLM_MODEL的值改为"chatglm"
|
7 |
|
8 |
-
|
9 |
-
LLM_MODEL = "chatglm"
|
10 |
-
```
|
11 |
-
- 运行!
|
12 |
-
``` sh
|
13 |
-
`python main.py`
|
14 |
-
```
|
15 |
|
16 |
-
|
17 |
|
18 |
-
|
19 |
-
- 1、SLACK_CLAUDE_BOT_ID
|
20 |
-
- 2、SLACK_CLAUDE_USER_TOKEN
|
21 |
|
22 |
-
|
|
|
|
|
23 |
|
24 |
-
|
25 |
|
26 |
-
|
27 |
-
- 把cookie(json)加入config.py (NEWBING_COOKIES)
|
28 |
|
29 |
-
## Moss
|
30 |
-
- 使用docker-compose
|
31 |
|
32 |
-
|
33 |
-
- 使用docker-compose
|
34 |
|
35 |
-
|
36 |
-
- 使用docker-compose
|
37 |
|
38 |
-
|
39 |
-
- 使用docker-compose
|
40 |
|
|
|
41 |
|
42 |
-
|
43 |
-
|
|
|
44 |
|
45 |
-
|
46 |
-
``` sh
|
47 |
-
# 1 下载模型
|
48 |
-
git clone https://github.com/oobabooga/text-generation-webui.git
|
49 |
-
# 2 这个仓库的最新代码有问题,回滚到几周之前
|
50 |
-
git reset --hard fcda3f87767e642d1c0411776e549e1d3894843d
|
51 |
-
# 3 切换路径
|
52 |
-
cd text-generation-webui
|
53 |
-
# 4 安装text-generation的额外依赖
|
54 |
-
pip install accelerate bitsandbytes flexgen gradio llamacpp markdown numpy peft requests rwkv safetensors sentencepiece tqdm datasets git+https://github.com/huggingface/transformers
|
55 |
-
# 5 下载模型
|
56 |
-
python download-model.py facebook/galactica-1.3b
|
57 |
-
# 其他可选如 facebook/opt-1.3b
|
58 |
-
# facebook/galactica-1.3b
|
59 |
-
# facebook/galactica-6.7b
|
60 |
-
# facebook/galactica-120b
|
61 |
-
# facebook/pygmalion-1.3b 等
|
62 |
-
# 详情见 https://github.com/oobabooga/text-generation-webui
|
63 |
|
64 |
-
|
65 |
-
python server.py --cpu --listen --listen-port 7865 --model facebook_galactica-1.3b
|
66 |
-
```
|
67 |
-
|
68 |
-
### 2. 修改config.py
|
69 |
-
|
70 |
-
``` sh
|
71 |
-
# LLM_MODEL格式: tgui:[模型]@[ws地址]:[ws端口] , 端口要和上面给定的端口一致
|
72 |
-
LLM_MODEL = "tgui:galactica-1.3b@localhost:7860"
|
73 |
-
```
|
74 |
-
|
75 |
-
### 3. 运行!
|
76 |
-
``` sh
|
77 |
-
cd chatgpt-academic
|
78 |
-
python main.py
|
79 |
-
```
|
|
|
Added (the new model-integration guide):

P.S. If you successfully integrate a new large model by following the steps below, Pull Requests are welcome (if you run into trouble while integrating a new model, feel free to join the QQ group at the bottom of the README and contact the group owner).

# How to integrate another local large language model

1. Copy `request_llms/bridge_llama2.py` and rename it whatever you like
2. Modify the `load_model_and_tokenizer` method to load your model and tokenizer (find a demo on the model's official page and copy-paste it)
3. Modify the `llm_stream_generator` method to define model inference (find a demo on the model's official page and copy-paste it)
4. Test from the command line
    - Modify `tests/test_llms.py` (one look at that file will tell you what to change)
    - Run `python tests/test_llms.py`
5. Once the test passes, make the final changes in `request_llms/bridge_all.py` to wire your model fully into the framework (again, one look at that file will tell you what to change)
6. Update the `LLM_MODEL` configuration, then run `python main.py` and test the final result

# How to integrate another online large language model

1. Copy `request_llms/bridge_zhipu.py` and rename it whatever you like
2. Modify `predict_no_ui_long_connection`
3. Modify `predict`
4. Test from the command line
    - Modify `tests/test_llms.py` (one look at that file will tell you what to change)
    - Run `python tests/test_llms.py`
5. Once the test passes, make the final changes in `request_llms/bridge_all.py` to wire your model fully into the framework (one look at that file will tell you what to change)
6. Update the `LLM_MODEL` configuration, then run `python main.py` and test the final result
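The six steps above rest on one contract: a bridge module exports a blocking `predict_no_ui_long_connection` (used by plugins) and a streaming generator `predict` (used by the chat UI), and `bridge_all.py` maps each model name to that pair in its `model_info` table. A toy sketch of the contract (the signatures here are simplified stand-ins, not the exact GPT-Academic interfaces):

```python
# Hypothetical echo stand-ins for the two entry points every bridge exports.

def predict_no_ui_long_connection(inputs, llm_kwargs=None, history=None,
                                  sys_prompt="", observe_window=None):
    # Non-UI entry point: must return the complete answer as one string.
    history = history or []
    return f"echo({inputs})"

def predict(inputs, llm_kwargs=None, chatbot=None, history=None):
    # UI entry point: must be a generator yielding the partial answer so far.
    partial = ""
    for token in ["echo", "(", inputs, ")"]:
        partial += token
        yield partial

# bridge_all.py-style registration: one dict entry per model name.
model_info = {}
model_info.update({
    "my-model": {
        "fn_with_ui": predict,                            # streaming, for the chat UI
        "fn_without_ui": predict_no_ui_long_connection,   # blocking, for plugins
        "endpoint": None,                                 # local model: no HTTP endpoint
        "max_token": 4096,
    }
})

answer = model_info["my-model"]["fn_without_ui"]("hi")
chunks = list(model_info["my-model"]["fn_with_ui"]("hi"))
```

The framework then dispatches by model name, calling `fn_with_ui` when a chat window is attached and `fn_without_ui` otherwise.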
request_llms/bridge_all.py
CHANGED

```diff
@@ -543,6 +543,22 @@ if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai
         })
     except:
         print(trimmed_format_exc())
+if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
+    try:
+        from .bridge_deepseekcoder import predict_no_ui_long_connection as deepseekcoder_noui
+        from .bridge_deepseekcoder import predict as deepseekcoder_ui
+        model_info.update({
+            "deepseekcoder": {
+                "fn_with_ui": deepseekcoder_ui,
+                "fn_without_ui": deepseekcoder_noui,
+                "endpoint": None,
+                "max_token": 4096,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            }
+        })
+    except:
+        print(trimmed_format_exc())
 
 # <-- 用于定义和切换多个azure模型 -->
 AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")
```
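The hunk follows the file's standing convention: each optional model is imported and registered inside try/except, so one missing dependency only prints a traceback instead of taking down the whole `model_info` table. The shape of that pattern, with illustrative loaders in place of the real bridge imports:

```python
import traceback

AVAIL_LLM_MODELS = ["deepseekcoder", "broken-model"]   # illustrative config
model_info = {}

def register(name, loader):
    # Guarded registration: a failing loader must not abort startup,
    # it should only log and move on (as bridge_all.py does).
    if name not in AVAIL_LLM_MODELS:
        return
    try:
        model_info[name] = loader()
    except Exception:
        print(traceback.format_exc())

register("deepseekcoder", lambda: {"endpoint": None, "max_token": 4096})
register("broken-model", lambda: 1 / 0)   # raises, is caught, table survives
```

After both calls, `model_info` contains only the model whose loader succeeded.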
request_llms/bridge_deepseekcoder.py
ADDED

```python
model_name = "deepseek-coder-6.7b-instruct"
cmd_to_install = "未知" # "`pip install -r request_llms/requirements_qwen.txt`"

import os
from toolbox import ProxyNetworkActivate
from toolbox import get_conf
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
from threading import Thread

def download_huggingface_model(model_name, max_retry, local_dir):
    from huggingface_hub import snapshot_download
    for i in range(1, max_retry):
        try:
            snapshot_download(repo_id=model_name, local_dir=local_dir, resume_download=True)
            break
        except Exception as e:
            print(f'\n\n下载失败,重试第{i}次中...\n\n')
    return local_dir

# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetCoderLMHandle(LocalLLMHandle):

    def load_model_info(self):
        # 🏃‍♂️🏃‍♂️🏃‍♂️ executed in the child process
        self.model_name = model_name
        self.cmd_to_install = cmd_to_install

    def load_model_and_tokenizer(self):
        # 🏃‍♂️🏃‍♂️🏃‍♂️ executed in the child process
        with ProxyNetworkActivate('Download_LLM'):
            from transformers import AutoTokenizer, AutoModelForCausalLM, TextIteratorStreamer
            model_name = "deepseek-ai/deepseek-coder-6.7b-instruct"
            # local_dir = f"~/.cache/{model_name}"
            # if not os.path.exists(local_dir):
            #     tokenizer = download_huggingface_model(model_name, max_retry=128, local_dir=local_dir)
            tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
            self._streamer = TextIteratorStreamer(tokenizer)
            model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
            if get_conf('LOCAL_MODEL_DEVICE') != 'cpu':
                model = model.cuda()
        return model, tokenizer

    def llm_stream_generator(self, **kwargs):
        # 🏃‍♂️🏃‍♂️🏃‍♂️ executed in the child process
        def adaptor(kwargs):
            query = kwargs['query']
            max_length = kwargs['max_length']
            top_p = kwargs['top_p']
            temperature = kwargs['temperature']
            history = kwargs['history']
            return query, max_length, top_p, temperature, history

        query, max_length, top_p, temperature, history = adaptor(kwargs)
        history.append({'role': 'user', 'content': query})
        messages = history
        inputs = self._tokenizer.apply_chat_template(messages, return_tensors="pt").to(self._model.device)
        generation_kwargs = dict(
            inputs=inputs,
            max_new_tokens=max_length,
            do_sample=False,
            top_p=top_p,
            streamer=self._streamer,
            top_k=50,
            temperature=temperature,
            num_return_sequences=1,
            eos_token_id=32021,
        )
        thread = Thread(target=self._model.generate, kwargs=generation_kwargs, daemon=True)
        thread.start()
        generated_text = ""
        for new_text in self._streamer:
            generated_text += new_text
            # print(generated_text)
            yield generated_text

    def try_to_import_special_deps(self, **kwargs): pass
        # import something that will raise error if the user does not install requirement_*.txt
        # 🏃‍♂️🏃‍♂️🏃‍♂️ executed in the main process
        # import importlib
        # importlib.import_module('modelscope')


# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetCoderLMHandle, model_name, history_format='chatglm3')
```
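The new bridge streams tokens by running `model.generate` in a worker thread and iterating a `TextIteratorStreamer` in the caller, yielding the full text accumulated so far on each step. A stdlib-only sketch of that producer/consumer shape (`TinyStreamer` and `fake_generate` are stand-ins, not the transformers API):

```python
import queue
import threading

class TinyStreamer:
    # Stand-in for transformers' TextIteratorStreamer: the producer thread
    # calls put()/end(), the consumer simply iterates.
    _END = object()

    def __init__(self):
        self.q = queue.Queue()

    def put(self, text):
        self.q.put(text)

    def end(self):
        self.q.put(self._END)

    def __iter__(self):
        while True:
            item = self.q.get()
            if item is self._END:
                return
            yield item

def fake_generate(prompt, streamer):
    # Stands in for model.generate(..., streamer=streamer) in the worker thread.
    for word in prompt.split():
        streamer.put(word + " ")
    streamer.end()

streamer = TinyStreamer()
thread = threading.Thread(target=fake_generate,
                          args=("hello deepseek coder", streamer), daemon=True)
thread.start()

generated_text = ""
chunks = []
for new_text in streamer:          # same consumption loop as llm_stream_generator
    generated_text += new_text
    chunks.append(generated_text)  # each yield carries the full text so far
thread.join()
```

The queue decouples generation speed from UI refresh speed, which is exactly why the bridge can update the chat window incrementally.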
request_llms/bridge_llama2.py
CHANGED

```diff
@@ -12,7 +12,7 @@ from threading import Thread
 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 Local Model
 # ------------------------------------------------------------------------------------------------------------------------
-class GetONNXGLMHandle(LocalLLMHandle):
+class GetLlamaHandle(LocalLLMHandle):
 
     def load_model_info(self):
         # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
@@ -87,4 +87,4 @@ class GetONNXGLMHandle(LocalLLMHandle):
 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 GPT-Academic Interface
 # ------------------------------------------------------------------------------------------------------------------------
-predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetONNXGLMHandle, model_name)
+predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetLlamaHandle, model_name)
```
request_llms/bridge_qwen.py
CHANGED

```diff
@@ -15,7 +15,7 @@ from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 Local Model
 # ------------------------------------------------------------------------------------------------------------------------
-class GetONNXGLMHandle(LocalLLMHandle):
+class GetQwenLMHandle(LocalLLMHandle):
 
     def load_model_info(self):
         # 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
@@ -64,4 +64,4 @@ class GetONNXGLMHandle(LocalLLMHandle):
 # ------------------------------------------------------------------------------------------------------------------------
 # 🔌💻 GPT-Academic Interface
 # ------------------------------------------------------------------------------------------------------------------------
-predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetONNXGLMHandle, model_name)
+predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
```
request_llms/bridge_zhipu.py
CHANGED

```diff
@@ -1,6 +1,7 @@
 
 import time
 from toolbox import update_ui, get_conf, update_ui_lastest_msg
+from toolbox import check_packages, report_exception
 
 model_name = '智谱AI大模型'
 
@@ -37,6 +38,14 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     chatbot.append((inputs, ""))
     yield from update_ui(chatbot=chatbot, history=history)
 
+    # 尝试导入依赖,如果缺少依赖,则给出安装建议
+    try:
+        check_packages(["zhipuai"])
+    except:
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
+                                         chatbot=chatbot, history=history, delay=0)
+        return
+
     if validate_key() is False:
         yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
         return
```
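`check_packages` fails fast when an optional dependency such as `zhipuai` is absent, so `predict` can show a pip command instead of a raw traceback. A plausible stand-in built on `importlib.util.find_spec` (the real helper lives in `toolbox` and may differ):

```python
import importlib.util

def check_packages(packages):
    # Hypothetical stand-in for toolbox.check_packages: raise if any required
    # package cannot be found, so callers can suggest an install command.
    missing = [p for p in packages if importlib.util.find_spec(p) is None]
    if missing:
        raise ImportError("missing packages, install with: pip install " + " ".join(missing))

check_packages(["json"])                       # stdlib module: passes silently
try:
    check_packages(["surely_not_installed_pkg_xyz"])
    ok = True
except ImportError:
    ok = False                                 # caller can now show a pip hint
```

Probing with `find_spec` avoids actually importing the package, which keeps the check cheap and side-effect free.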
request_llms/local_llm_class.py
CHANGED

```diff
@@ -198,7 +198,7 @@ class LocalLLMHandle(Process):
         if res.startswith(self.std_tag):
             new_output = res[len(self.std_tag):]
             std_out = std_out[:std_out_clip_len]
-
+            print(new_output, end='')
             std_out = new_output + std_out
             yield self.std_tag + '\n```\n' + std_out + '\n```\n'
         elif res == '[Finish]':
```
tests/test_llms.py
CHANGED

```diff
@@ -15,7 +15,8 @@ if __name__ == "__main__":
     # from request_llms.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
     # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
     # from request_llms.bridge_claude import predict_no_ui_long_connection
-    from request_llms.bridge_internlm import predict_no_ui_long_connection
+    # from request_llms.bridge_internlm import predict_no_ui_long_connection
+    from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
     # from request_llms.bridge_qwen import predict_no_ui_long_connection
     # from request_llms.bridge_spark import predict_no_ui_long_connection
     # from request_llms.bridge_zhipu import predict_no_ui_long_connection
```
themes/contrast.py
CHANGED

```diff
@@ -1,6 +1,8 @@
+import os
 import gradio as gr
 from toolbox import get_conf
 CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
+theme_dir = os.path.dirname(__file__)
 
 def adjust_theme():
 
@@ -57,7 +59,7 @@ def adjust_theme():
         button_cancel_text_color_dark="white",
     )
 
-    with open('themes/common.js', 'r', encoding='utf8') as f:
+    with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
         js = f"<script>{f.read()}</script>"
 
     # 添加一个萌萌的看板娘
@@ -67,7 +69,9 @@ def adjust_theme():
             <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
            <script src="file=docs/waifu_plugin/autoload.js"></script>
         """
-    gradio_original_template_fn = gr.routes.templates.TemplateResponse
+    if not hasattr(gr, 'RawTemplateResponse'):
+        gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
+    gradio_original_template_fn = gr.RawTemplateResponse
     def gradio_new_template_fn(*args, **kwargs):
         res = gradio_original_template_fn(*args, **kwargs)
         res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
@@ -79,7 +83,7 @@ def adjust_theme():
         print('gradio版本较旧, 不能自定义字体和颜色')
         return set_theme
 
-    with open("themes/contrast.css", "r", encoding="utf-8") as f:
+    with open(os.path.join(theme_dir, 'contrast.css'), "r", encoding="utf-8") as f:
         advanced_css = f.read()
-    with open("themes/common.css", "r", encoding="utf-8") as f:
+    with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
         advanced_css += f.read()
```
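The `RawTemplateResponse` hunk guards against double-wrapping when `adjust_theme()` runs more than once (e.g. in a multi-user session): the original callable is stashed under a sentinel attribute exactly once, so every re-patch wraps the true original instead of the previous wrapper. A stdlib-only sketch of the same idea, with `gr` replaced by a hypothetical namespace:

```python
import types

gr = types.SimpleNamespace()                 # hypothetical stand-in for the gradio module
gr.TemplateResponse = lambda body: body      # the "original" template renderer

def patch(extra_js):
    # Idempotent monkey-patch: save the original under a sentinel attribute
    # only on the first call, then always wrap that saved original.
    if not hasattr(gr, 'RawTemplateResponse'):
        gr.RawTemplateResponse = gr.TemplateResponse
    original = gr.RawTemplateResponse
    def wrapped(body):
        return original(body) + extra_js
    gr.TemplateResponse = wrapped

patch("<js1>")
patch("<js2>")                               # re-patching does not stack wrappers
result = gr.TemplateResponse("<html>")
```

Without the `hasattr` guard, the second call would capture the first wrapper as "original" and the injected script would accumulate on every theme switch.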
themes/default.py
CHANGED

```diff
@@ -60,7 +60,7 @@ def adjust_theme():
 
     with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
         js = f"<script>{f.read()}</script>"
-
+
     # 添加一个萌萌的看板娘
     if ADD_WAIFU:
         js += """
@@ -68,7 +68,9 @@ def adjust_theme():
             <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
             <script src="file=docs/waifu_plugin/autoload.js"></script>
         """
-    gradio_original_template_fn = gr.routes.templates.TemplateResponse
+    if not hasattr(gr, 'RawTemplateResponse'):
+        gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
+    gradio_original_template_fn = gr.RawTemplateResponse
     def gradio_new_template_fn(*args, **kwargs):
         res = gradio_original_template_fn(*args, **kwargs)
         res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
```
themes/gradios.py
CHANGED

```diff
@@ -1,7 +1,9 @@
-import gradio as gr
 import logging
+import os
+import gradio as gr
 from toolbox import get_conf, ProxyNetworkActivate
 CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
+theme_dir = os.path.dirname(__file__)
 
 def dynamic_set_theme(THEME):
     set_theme = gr.themes.ThemeClass()
@@ -13,7 +15,6 @@ def dynamic_set_theme(THEME):
     return set_theme
 
 def adjust_theme():
-
     try:
         set_theme = gr.themes.ThemeClass()
         with ProxyNetworkActivate('Download_Gradio_Theme'):
@@ -23,7 +24,7 @@ def adjust_theme():
         if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
         set_theme = set_theme.from_hub(THEME.lower())
 
-        with open('themes/common.js', 'r', encoding='utf8') as f:
+        with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
            js = f"<script>{f.read()}</script>"
 
        # 添加一个萌萌的看板娘
@@ -33,7 +34,9 @@ def adjust_theme():
             <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
             <script src="file=docs/waifu_plugin/autoload.js"></script>
         """
-        gradio_original_template_fn = gr.routes.templates.TemplateResponse
+        if not hasattr(gr, 'RawTemplateResponse'):
+            gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
+        gradio_original_template_fn = gr.RawTemplateResponse
        def gradio_new_template_fn(*args, **kwargs):
            res = gradio_original_template_fn(*args, **kwargs)
            res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
@@ -46,7 +49,5 @@ def adjust_theme():
        logging.error('gradio版本较旧, 不能自定义字体和颜色:', trimmed_format_exc())
        return set_theme
 
-
-    # advanced_css = f.read()
-    with open("themes/common.css", "r", encoding="utf-8") as f:
+    with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
        advanced_css = f.read()
```
themes/green.py
CHANGED

```diff
@@ -1,6 +1,8 @@
+import os
 import gradio as gr
 from toolbox import get_conf
 CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
+theme_dir = os.path.dirname(__file__)
 
 def adjust_theme():
     try:
@@ -73,7 +75,7 @@ def adjust_theme():
         chatbot_code_background_color_dark="*neutral_950",
     )
 
-    with open('themes/common.js', 'r', encoding='utf8') as f:
+    with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
         js = f"<script>{f.read()}</script>"
 
     # 添加一个萌萌的看板娘
@@ -83,11 +85,13 @@ def adjust_theme():
             <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
             <script src="file=docs/waifu_plugin/autoload.js"></script>
         """
 
-    with open('themes/green.js', 'r', encoding='utf8') as f:
+    with open(os.path.join(theme_dir, 'green.js'), 'r', encoding='utf8') as f:
         js += f"<script>{f.read()}</script>"
 
-    gradio_original_template_fn = gr.routes.templates.TemplateResponse
+    if not hasattr(gr, 'RawTemplateResponse'):
+        gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
+    gradio_original_template_fn = gr.RawTemplateResponse
     def gradio_new_template_fn(*args, **kwargs):
         res = gradio_original_template_fn(*args, **kwargs)
         res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
@@ -99,8 +103,7 @@ def adjust_theme():
         print('gradio版本较旧, 不能自定义字体和颜色')
         return set_theme
 
-
-    with open("themes/green.css", "r", encoding="utf-8") as f:
+    with open(os.path.join(theme_dir, 'green.css'), "r", encoding="utf-8") as f:
         advanced_css = f.read()
-    with open("themes/common.css", "r", encoding="utf-8") as f:
+    with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
         advanced_css += f.read()
```
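All four theme files receive the same fix: assets are resolved relative to the module via `theme_dir = os.path.dirname(__file__)` rather than via a CWD-relative `'themes/...'` string, so the themes load no matter where the process was launched from. The idea as a tiny helper (the paths below are illustrative):

```python
import os

def asset_path(module_file, name):
    # Resolve an asset that sits next to the given module file, independent
    # of the process working directory (the bug theme_dir fixes).
    return os.path.join(os.path.dirname(os.path.abspath(module_file)), name)

p = asset_path("/opt/app/themes/gradios.py", "common.css")
```

Inside a real module one would pass `__file__`, exactly as the diffs above do.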
version
CHANGED

```diff
@@ -1,5 +1,5 @@
 {
-  "version": 3.
+  "version": 3.61,
   "show_feature": true,
-  "new_feature": "
+  "new_feature": "修复潜在的多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮"
 }
```