Black Pink Killer committed on
Commit b5ee4a3
1 parent: b6904a2
.gitattributes DELETED
@@ -1,34 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
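The deleted `.gitattributes` above routed large binary formats through Git LFS. As a quick illustration of what those patterns cover, here is a minimal Python sketch, assuming plain shell-style matching; real gitattributes matching also honours `**` path patterns such as `saved_model/**/*`, which `fnmatch` only approximates:

```python
from fnmatch import fnmatch

# A subset of the patterns from the deleted file; the full list is above.
LFS_PATTERNS = ["*.7z", "*.bin", "*.ckpt", "*.h5", "*.onnx",
                "*.pt", "*.pth", "*.safetensors", "*.zip", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches one of the LFS patterns."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("model.safetensors"))  # True
print(tracked_by_lfs("README.md"))          # False
```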
.github/ISSUE_TEMPLATE/report-bug.md ADDED
@@ -0,0 +1,42 @@
+ ---
+ name: Report Bug
+ about: Report a bug that you are confident is a bug, not a problem on your end
+ title: "[BUG] Brief description of the error"
+ labels: bug
+ assignees: ''
+
+ ---
+
+ > Thanks for filing an issue! Please fill in the following information as completely as possible so we can locate the problem.
+ > If you are confident this is a bug on our side, rather than a failed deployment on yours, feel free to submit this issue! If you cannot tell whether it is a bug or a problem on your end, please choose another issue template.
+ > Note: please edit the "Brief description of the error" part of the issue title, and replace the template boilerplate with your own text.
+
+ ### Description
+ Briefly describe the bug.
+
+ ### Steps to reproduce
+ What did you do before the bug appeared? For example:
+ 1. Completed a local deployment normally
+ 2. Asked ChatGPT in the input box to "output trigonometric functions in LaTeX format"
+ 3. The program terminated after ChatGPT output part of the response
+
+ ### Screenshots
+ If possible, provide screenshots of the error, e.g. of the locally deployed web page and of the error report in the terminal.
+
+ ### Error report in the terminal (console)
+ If possible, copy the main error report from the terminal.
+
+ ```console
+ (replace this line with the error report)
+ ```
+
+ ### Environment
+ **Please fill in the following list:**
+
+ - OS: [e.g. Windows 11 22H2]
+ - Browser: [e.g. Chrome, Safari]
+ - Gradio version:
+ - Python (or Python3) version:
+
+ ### Additional context
+ Anything else you want to add
.github/ISSUE_TEMPLATE/report-docker.md CHANGED
@@ -1,30 +1,33 @@
  ---
  name: Report Docker
- about: Report errors when deploying on a server with Docker
+ about: Report problems or errors when deploying with Docker
  title: "[Docker] Brief description of the error"
  labels: question, docker deployment
  assignees: ''

  ---

+ > Thanks for filing an issue! Please fill in the following information as completely as possible so we can locate the problem.
+ > Please check first whether the README already answers your question. If not, search the existing issues for the same or a similar problem. If you are confident nobody has hit this problem before, feel free to submit this issue! Note: please edit the "Brief description of the error" part of the issue title, and replace the template boilerplate with your own text.
+
  ### Description
- > Briefly describe the error.
+ Briefly describe the error.

  ### Steps to reproduce
- > Describe the steps that led to the error.
+ Describe the steps that led to the error.

  ### Screenshots
- > If possible, provide screenshots of the error, e.g. of the deployed web page and of the error report in the console.
+ If possible, provide screenshots of the error, e.g. of the deployed web page and of the error report in the console.

  ### Error report in the terminal (console)
- > If possible, copy the main error report from the terminal.
+ If possible, copy the main error report from the terminal.

  ```console
  (replace this line with the error report)
  ```

  ### Environment
- > **Please fill in the following list:**
+ **Please fill in the following list:**

  - OS: [e.g. Linux Ubuntu]
  - Docker version:
@@ -32,4 +35,4 @@ assignees: ''
  - Python (or Python3) version:

  ### Additional context
- > Anything else you want to add
+ Anything else you want to add
.github/ISSUE_TEMPLATE/report-localhost.md CHANGED
@@ -1,29 +1,30 @@
  ---
  name: Report localhost
- about: Report errors when deploying locally
+ about: Report problems or errors when deploying locally
  title: "[Local deployment] Brief description of the error"
  labels: question, local deployment
  assignees: ''

  ---

- Thanks for filing an issue! Please fill in the following information as completely as possible so we can locate the problem.
- If you are not sure how to fill it in, click "Preview" above the editor to read our instructions, then fill in your information on the blank lines.
+ > Thanks for filing an issue! Please fill in the following information as completely as possible so we can locate the problem.
+ > Please check first whether the README already answers your question. If not, search the existing issues for the same or a similar problem. If you are confident nobody has hit this problem before, feel free to submit this issue! Note: please edit the "Brief description of the error" part of the issue title, and replace the template boilerplate with your own text.
+ > If your question is about `Something went wrong Expecting value: line 1 column 1 (char 0)`, please read the README again, carefully!!

  ### Description
- > Briefly describe the error. Also remember to replace "Brief description of the error" in the issue title.
+ Briefly describe the error. Also remember to replace "Brief description of the error" in the issue title.

  ### Steps to reproduce
- > What did you do before the error appeared? For example:
- > 1. Completed a local deployment normally
- > 2. Asked ChatGPT in the input box to "output trigonometric functions in LaTeX format"
- > 3. The program terminated after ChatGPT output part of the response
+ What did you do before the error appeared? For example:
+ 1. Completed a local deployment normally
+ 2. Asked ChatGPT in the input box to "output trigonometric functions in LaTeX format"
+ 3. The program terminated after ChatGPT output part of the response

  ### Screenshots
- > If possible, provide screenshots of the error, e.g. of the locally deployed web page and of the error report in the terminal.
+ If possible, provide screenshots of the error, e.g. of the locally deployed web page and of the error report in the terminal.

  ### Error report in the terminal (console)
- > If possible, copy the main error report from the terminal.
+ If possible, copy the main error report from the terminal.

  ```console
  (replace this line with the error report)
@@ -31,13 +32,13 @@ assignees: ''

  ### Environment
  #### Desktop system
- > **Please fill in the following list:**
+ **Please fill in the following list:**

- - OS: [e.g. Windows 11 22H2]
- - Browser: [e.g. Chrome, Safari]
+ - OS: [e.g. Windows 11 22H2]
+ - Browser: [e.g. Chrome, Safari]

  #### Runtime dependencies
- > **Please fill in the following list:**
+ **Please fill in the following list:**
  > You can check your software versions by running the following commands in the terminal:
  > ```shell
  > pip show gradio
@@ -49,4 +50,4 @@ assignees: ''
  - Python (or Python3) version:

  ### Additional context
- > Anything else you want to add
+ Anything else you want to add
.github/ISSUE_TEMPLATE/report-others.md CHANGED
@@ -1,36 +1,36 @@
  ---
  name: Report others
- about: Report other errors (e.g. Spaces on Hugging Face)
+ about: Report other problems (e.g. Spaces on Hugging Face)
  title: "[Other] Brief description of the error"
  labels: question
  assignees: ''

  ---

- Thanks for filing an issue! Please fill in the following information as completely as possible so we can locate the problem.
- If you are not sure how to fill it in, click "Preview" above the editor to read our instructions, then click "Write" and fill in your information on the blank lines.
+ > Thanks for filing an issue! Please fill in the following information as completely as possible so we can locate the problem.
+ > Please check first whether the README already answers your question. If not, search the existing issues for the same or a similar problem. If you are confident nobody has hit this problem before, feel free to submit this issue! Note: please edit the "Brief description of the error" part of the issue title, and replace the template boilerplate with your own text.

  ### Description
- > Briefly describe the error.
+ Briefly describe the error. Also remember to replace "Brief description of the error" in the issue title.

  ### Steps to reproduce
- > What did you do before the error appeared? For example:
- > 1. Completed a local deployment normally
- > 2. Asked ChatGPT in the input box to "output trigonometric functions in LaTeX format"
- > 3. The program terminated after ChatGPT output part of the response
+ What did you do before the error appeared? For example:
+ 1. Completed a local deployment normally
+ 2. Asked ChatGPT in the input box to "output trigonometric functions in LaTeX format"
+ 3. The program terminated after ChatGPT output part of the response

  ### Screenshots
- > If possible, provide screenshots of the error, e.g. of the locally deployed web page and of the error report in the terminal.
+ If possible, provide screenshots of the error, e.g. of the locally deployed web page and of the error report in the terminal.

  ### Error report in the terminal (console)
- > If possible, copy the main error report from the terminal.
+ If possible, copy the main error report from the terminal.

  ```console
  (replace this line with the error report)
  ```

  ### Environment
- > **Please fill in the following list:**
+ **Please fill in the following list:**

  - OS: [e.g. Windows 11 22H2]
  - Browser: [e.g. Chrome, Safari]
@@ -38,4 +38,4 @@ assignees: ''
  - Python (or Python3) version:

  ### Additional context
- > Anything else you want to add
+ Anything else you want to add
.github/ISSUE_TEMPLATE/report-server.md ADDED
@@ -0,0 +1,38 @@
+ ---
+ name: Report Server
+ about: Report problems or errors when deploying on a remote server
+ title: "[Remote deployment] Brief description of the error"
+ labels: question, server deployment
+ assignees: ''
+
+ ---
+
+ > Thanks for filing an issue! Please fill in the following information as completely as possible so we can locate the problem.
+ > Please check first whether the README already answers your question. If not, search the existing issues for the same or a similar problem. If you are confident nobody has hit this problem before, feel free to submit this issue! Note: please edit the "Brief description of the error" part of the issue title, and replace the template boilerplate with your own text.
+
+ ### Description
+ Briefly describe the error.
+
+ ### Steps to reproduce
+ Describe the steps that led to the error.
+
+ ### Screenshots
+ If possible, provide screenshots of the error, e.g. of the deployed web page and of the error report in the console.
+
+ ### Error report in the terminal (console)
+ If possible, copy the main error report from the terminal.
+
+ ```console
+ (replace this line with the error report)
+ ```
+
+ ### Environment
+ **Please fill in the following list:**
+
+ - OS: [e.g. Linux Ubuntu]
+ - Docker version:
+ - Gradio version:
+ - Python (or Python3) version:
+
+ ### Additional context
+ Anything else you want to add
.gitignore CHANGED
@@ -27,6 +27,7 @@ share/python-wheels/
  *.egg
  MANIFEST
  history/
+ index/

  # PyInstaller
  # Usually these files are written by a python script from a template
@@ -135,5 +136,3 @@ dmypy.json
  api_key.txt

  auth.json
- .idea/misc.xml
- .idea/workspace.xml
.idea/.gitignore ADDED
@@ -0,0 +1,3 @@
+ # Default ignored files
+ /shelf/
+ /workspace.xml
.idea/.name ADDED
@@ -0,0 +1 @@
+ ChuanhuChatbot.py
.idea/{ChatGPT.iml → chuanhu.iml} RENAMED
File without changes
.idea/inspectionProfiles/Project_Default.xml ADDED
@@ -0,0 +1,13 @@
+ <component name="InspectionProjectProfileManager">
+   <profile version="1.0">
+     <option name="myName" value="Project Default" />
+     <inspection_tool class="PyPep8NamingInspection" enabled="true" level="WEAK WARNING" enabled_by_default="true">
+       <option name="ignoredErrors">
+         <list>
+           <option value="N806" />
+           <option value="N802" />
+         </list>
+       </option>
+     </inspection_tool>
+   </profile>
+ </component>
.idea/{vcs.xml → misc.xml} RENAMED
@@ -1,6 +1,4 @@
  <?xml version="1.0" encoding="UTF-8"?>
  <project version="4">
-   <component name="VcsDirectoryMappings">
-     <mapping directory="" vcs="Git" />
-   </component>
+   <component name="ProjectRootManager" version="2" project-jdk-name="Python 3.11 (chuanhu)" project-jdk-type="Python SDK" />
  </project>
.idea/modules.xml CHANGED
@@ -2,7 +2,7 @@
  <project version="4">
    <component name="ProjectModuleManager">
      <modules>
-       <module fileurl="file://$PROJECT_DIR$/.idea/ChatGPT.iml" filepath="$PROJECT_DIR$/.idea/ChatGPT.iml" />
+       <module fileurl="file://$PROJECT_DIR$/.idea/chuanhu.iml" filepath="$PROJECT_DIR$/.idea/chuanhu.iml" />
      </modules>
    </component>
  </project>
ChuanhuChatbot.py CHANGED
@@ -1,19 +1,24 @@
  # -*- coding:utf-8 -*-
- import gradio as gr
  import os
  import logging
  import sys
- import argparse
- from utils import *
- from gradio import *
- from presets import *

+ import gradio as gr
+
+ from utils import *
+ from presets import *
+ from overwrites import *
+ from chat_func import *
+
- logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s")
-
- my_api_key = os.environ.get('Key')  # enter your API key here

- #if we are running in Docker
- if os.environ.get('dockerrun') == 'yes':
+ logging.basicConfig(
+     level=logging.DEBUG,
+     format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
+ )
+
+ my_api_key = ""  # enter your API key here
+ key = os.environ.get("Key")
+ # if we are running in Docker
+ if os.environ.get("dockerrun") == "yes":
      dockerflag = True
  else:
      dockerflag = False
@@ -22,10 +27,15 @@ authflag = False


  def checkPassword(pswdTxt):
-     if pswdTxt == os.environ.get('Password'):
+     if pswdTxt == os.environ.get("Password"):
          logging.info(colorama.Back.BLUE + "\n****密码正确!****" + colorama.Style.RESET_ALL)
-         return {keyTxt: my_api_key, status_display: "😎密码对了! 开搞!😎",
-                 pswd: gr.update(visible=False), logBtn: gr.update(visible=False), balanceTxt: gr.update(visible=True)}
+         return {keyTxt: key,
+                 pswd: gr.update(visible=False),
+                 logBtn: gr.update(visible=False),
+                 balanceTxt: gr.update(visible=True),
+                 balanceBtn: gr.update(visible=True),
+                 status_display: "😎密码对了! 开搞!😎",
+                 chatbot: [], history: [], token_count: [], status_display: construct_token_message(0)}
      else:
          logging.info(colorama.Back.RED + "\n****密码尝试错误!****" + colorama.Style.RESET_ALL)
          return {keyTxt: "", status_display: "🤔"}
@@ -34,23 +44,28 @@ def checkPassword(pswdTxt):
  def checkBalance():
      url = "https://chat-gpt.aurorax.cloud/dashboard/billing/credit_grants"
      res = requests.get(url, headers={
-         "Authorization": f"Bearer " + my_api_key
+         "Authorization": f"Bearer " + key
      }, timeout=60).json()
+     print(res)
      return "$ " + str(round(res['total_available'], 2))


  if dockerflag:
-     my_api_key = os.environ.get('my_api_key')
+     my_api_key = os.environ.get("my_api_key")
      if my_api_key == "empty":
          logging.error("Please give a api key!")
          sys.exit(1)
-     #auth
-     username = os.environ.get('USERNAME')
-     password = os.environ.get('PASSWORD')
+     # auth
+     username = os.environ.get("USERNAME")
+     password = os.environ.get("PASSWORD")
      if not (isinstance(username, type(None)) or isinstance(password, type(None))):
          authflag = True
  else:
-     if not my_api_key and os.path.exists("api_key.txt") and os.path.getsize("api_key.txt"):
+     if (
+         not my_api_key
+         and os.path.exists("api_key.txt")
+         and os.path.getsize("api_key.txt")
+     ):
          with open("api_key.txt", "r") as f:
              my_api_key = f.read().strip()
          if os.path.exists("auth.json"):
@@ -62,144 +77,411 @@ else:
          authflag = True

  gr.Chatbot.postprocess = postprocess
-
- with gr.Blocks(css=customCSS,) as demo:
+ PromptHelper.compact_text_chunks = compact_text_chunks
+
+ with open("custom.css", "r", encoding="utf-8") as f:
+     customCSS = f.read()
+
+ with gr.Blocks(
+     css=customCSS,
+     theme=gr.themes.Soft(
+         primary_hue=gr.themes.Color(
+             c50="#02C160",
+             c100="rgba(2, 193, 96, 0.2)",
+             c200="#02C160",
+             c300="rgba(2, 193, 96, 0.32)",
+             c400="rgba(2, 193, 96, 0.32)",
+             c500="rgba(2, 193, 96, 1.0)",
+             c600="rgba(2, 193, 96, 1.0)",
+             c700="rgba(2, 193, 96, 0.32)",
+             c800="rgba(2, 193, 96, 0.32)",
+             c900="#02C160",
+             c950="#02C160",
+         ),
+         secondary_hue=gr.themes.Color(
+             c50="#576b95",
+             c100="#576b95",
+             c200="#576b95",
+             c300="#576b95",
+             c400="#576b95",
+             c500="#576b95",
+             c600="#576b95",
+             c700="#576b95",
+             c800="#576b95",
+             c900="#576b95",
+             c950="#576b95",
+         ),
+         neutral_hue=gr.themes.Color(
+             name="gray",
+             c50="#f9fafb",
+             c100="#f3f4f6",
+             c200="#e5e7eb",
+             c300="#d1d5db",
+             c400="#B2B2B2",
+             c500="#808080",
+             c600="#636363",
+             c700="#515151",
+             c800="#393939",
+             c900="#272727",
+             c950="#171717",
+         ),
+         radius_size=gr.themes.sizes.radius_sm,
+     ).set(
+         button_primary_background_fill="#06AE56",
+         button_primary_background_fill_dark="#06AE56",
+         button_primary_background_fill_hover="#07C863",
+         button_primary_border_color="#06AE56",
+         button_primary_border_color_dark="#06AE56",
+         button_primary_text_color="#FFFFFF",
+         button_primary_text_color_dark="#FFFFFF",
+         button_secondary_background_fill="#F2F2F2",
+         button_secondary_background_fill_dark="#2B2B2B",
+         button_secondary_text_color="#393939",
+         button_secondary_text_color_dark="#FFFFFF",
+         # background_fill_primary="#F7F7F7",
+         # background_fill_primary_dark="#1F1F1F",
+         block_title_text_color="*primary_500",
+         block_title_background_fill="*primary_100",
+         input_background_fill="#F6F6F6",
+     ),
+ ) as demo:
      history = gr.State([])
      token_count = gr.State([])
      promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2))
+     user_api_key = gr.State(my_api_key)
      TRUECOMSTANT = gr.State(True)
      FALSECONSTANT = gr.State(False)
      topic = gr.State("未命名对话历史记录")

-     # gr.HTML("""
-     # <div style="text-align: center; margin-top: 20px;">
-     # """)
-     gr.HTML(title)
+     with gr.Row():
+         gr.HTML(title)
+         status_display = gr.Markdown(get_geoip(), elem_id="status_display")

      with gr.Row(scale=1).style(equal_height=True):
-
          with gr.Column(scale=5):
              with gr.Row(scale=1):
-                 chatbot = gr.Chatbot().style(height=550)  # .style(color_map=("#1D51EE", "#585A5B"))
+                 chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%")
              with gr.Row(scale=1):
                  with gr.Column(scale=12):
-                     user_input = gr.Textbox(show_label=False, placeholder="在这里输入").style(
-                         container=False)
-                 with gr.Column(min_width=50, scale=1):
-                     submitBtn = gr.Button("🚀", variant="primary")
+                     user_input = gr.Textbox(
+                         show_label=False, placeholder="在这里输入"
+                     ).style(container=False)
+                 with gr.Column(min_width=70, scale=1):
+                     submitBtn = gr.Button("发送", variant="primary")
              with gr.Row(scale=1):
-                 emptyBtn = gr.Button("🧹 新的对话",)
+                 emptyBtn = gr.Button(
+                     "🧹 新的对话",
+                 )
                  retryBtn = gr.Button("🔄 重新生成")
-                 delLastBtn = gr.Button("🗑️ 删除最近一条对话")
+                 delLastBtn = gr.Button("🗑️ 删除一条对话")
                  reduceTokenBtn = gr.Button("♻️ 总结对话")

-
-
          with gr.Column():
-             with gr.Column(min_width=50,scale=1):
-                 status_display = gr.Markdown("status: ready")
+             with gr.Column(min_width=50, scale=1):
                  with gr.Tab(label="ChatGPT"):
                      pswd = gr.Textbox(show_label=True, placeholder=f"Password to access...", type="password",
                                        visible=not HIDE_MY_KEY, label="密码")
-                     keyTxt = gr.Textbox(show_label=True, placeholder=f"OpenAI API-key...", type="password",
-                                         visible=False, label="API-Key")
-                     # pswd.change(checkPassword, inputs=pswd, outputs={keyTxt, status_display})
+                     keyTxt = gr.Textbox(
+                         show_label=True,
+                         placeholder=f"OpenAI API-key...",
+                         type="password",
+                         visible=False,
+                         label="API-Key",
+                     )
                      logBtn = gr.Button("🚀检查密码🚀", variant="primary")
-                     model_select_dropdown = gr.Dropdown(label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0])
-                     with gr.Accordion("参数", open=False):
-                         top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True,
-                                           label="Top-p (nucleus sampling)",)
-                         temperature = gr.Slider(minimum=-0, maximum=5.0, value=1.0,
-                                                 step=0.1, interactive=True, label="Temperature",)
-                     use_streaming_checkbox = gr.Checkbox(label="实时传输回答", value=True, visible=enable_streaming_option)
-                     use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False)
                      balanceTxt = gr.Textbox(show_label=True, placeholder=f"Await Checking...", type="text",
                                              visible=False, label="API余额")
-                     balanceBtn = gr.Button("🔄 刷新")
+                     balanceBtn = gr.Button("🔄 刷新", visible=False)
+                     model_select_dropdown = gr.Dropdown(
+                         label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0]
+                     )
+                     use_streaming_checkbox = gr.Checkbox(
+                         label="实时传输回答", value=True, visible=enable_streaming_option
+                     )
+                     use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False)
+                     index_files = gr.Files(label="上传索引文件", type="file", multiple=True)

                  with gr.Tab(label="Prompt"):
-                     systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", label="System prompt", value=initial_prompt).style(container=True)
+                     systemPromptTxt = gr.Textbox(
+                         show_label=True,
+                         placeholder=f"在这里输入System Prompt...",
+                         label="System prompt",
+                         value=initial_prompt,
+                         lines=10,
+                     ).style(container=False)
                      with gr.Accordion(label="加载Prompt模板", open=True):
                          with gr.Column():
                              with gr.Row():
                                  with gr.Column(scale=6):
-                                     templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件", choices=get_template_names(plain=True), multiselect=False, value=get_template_names(plain=True)[0])
+                                     templateFileSelectDropdown = gr.Dropdown(
+                                         label="选择Prompt模板集合文件",
+                                         choices=get_template_names(plain=True),
+                                         multiselect=False,
+                                         value=get_template_names(plain=True)[0],
+                                     ).style(container=False)
                                  with gr.Column(scale=1):
                                      templateRefreshBtn = gr.Button("🔄 刷新")
                              with gr.Row():
                                  with gr.Column():
-                                     templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=load_template(get_template_names(plain=True)[0], mode=1), multiselect=False, value=load_template(get_template_names(plain=True)[0], mode=1)[0])
+                                     templateSelectDropdown = gr.Dropdown(
+                                         label="从Prompt模板中加载",
+                                         choices=load_template(
+                                             get_template_names(plain=True)[0], mode=1
+                                         ),
+                                         multiselect=False,
+                                         value=load_template(
+                                             get_template_names(plain=True)[0], mode=1
+                                         )[0],
+                                     ).style(container=False)

                  with gr.Tab(label="保存/加载"):
                      with gr.Accordion(label="保存/加载对话历史记录", open=True):
                          with gr.Column():
                              with gr.Row():
                                  with gr.Column(scale=6):
-                                     saveFileName = gr.Textbox(
-                                         show_label=True, placeholder=f"在这里输入保存的文件名...", label="设置保存文件名", value="对话历史记录").style(container=True)
+                                     historyFileSelectDropdown = gr.Dropdown(
+                                         label="从列表中加载对话",
+                                         choices=get_history_names(plain=True),
+                                         multiselect=False,
+                                         value=get_history_names(plain=True)[0],
+                                     )
                                  with gr.Column(scale=1):
-                                     saveHistoryBtn = gr.Button("💾 保存对话")
+                                     historyRefreshBtn = gr.Button("🔄 刷新")
                              with gr.Row():
                                  with gr.Column(scale=6):
-                                     historyFileSelectDropdown = gr.Dropdown(label="从列表中加载对话", choices=get_history_names(plain=True), multiselect=False, value=get_history_names(plain=True)[0])
+                                     saveFileName = gr.Textbox(
+                                         show_label=True,
+                                         placeholder=f"设置文件名: 默认为.json,可选为.md",
+                                         label="设置保存文件名",
+                                         value="对话历史记录",
+                                     ).style(container=True)
                                  with gr.Column(scale=1):
-                                     historyRefreshBtn = gr.Button("🔄 刷新")
+                                     saveHistoryBtn = gr.Button("💾 保存对话")
+                                     exportMarkdownBtn = gr.Button("📝 导出为Markdown")
+                                     gr.Markdown("默认保存于history文件夹")
+                             with gr.Row():
+                                 with gr.Column():
+                                     downloadFile = gr.File(interactive=True)
+
+                 with gr.Tab(label="高级"):
+                     default_btn = gr.Button("🔙 恢复默认设置")
+                     gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置")
+
+                     with gr.Accordion("参数", open=False):
+                         top_p = gr.Slider(
+                             minimum=-0,
+                             maximum=1.0,
+                             value=1.0,
+                             step=0.05,
+                             interactive=True,
+                             label="Top-p",
+                         )
+                         temperature = gr.Slider(
+                             minimum=-0,
+                             maximum=2.0,
+                             value=1.0,
+                             step=0.1,
+                             interactive=True,
+                             label="Temperature",
+                         )
+
+                     apiurlTxt = gr.Textbox(
+                         show_label=True,
+                         placeholder=f"在这里输入API地址...",
+                         label="API地址",
+                         value="https://api.openai.com/v1/chat/completions",
+                         lines=2,
+                     )
+                     changeAPIURLBtn = gr.Button("🔄 切换API地址")
+                     proxyTxt = gr.Textbox(
+                         show_label=True,
+                         placeholder=f"在这里输入代理地址...",
+                         label="代理地址(示例:http://127.0.0.1:10809)",
+                         value="",
+                         lines=2,
+                     )
+                     changeProxyBtn = gr.Button("🔄 设置代理地址")

-     gr.HTML("""
-     <div style="text-align: center; margin-top: 20px; margin-bottom: 20px;">
-     """)
      gr.Markdown(description)

      balanceBtn.click(checkBalance, outputs=balanceTxt)
-     logBtn.click(checkPassword, inputs=pswd, outputs={keyTxt, pswd, logBtn, status_display, balanceTxt})
-     user_input.submit(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox, model_select_dropdown, use_websearch_checkbox], [chatbot, history, status_display, token_count], show_progress=True)
+     logBtn.click(checkPassword, pswd, [keyTxt, pswd, logBtn, status_display, balanceTxt, balanceBtn,
+                                        chatbot, history, token_count, status_display])
+     keyTxt.submit(submit_key, keyTxt, [user_api_key, status_display])
+     keyTxt.change(submit_key, keyTxt, [user_api_key, status_display])
+     # Chatbot
+     user_input.submit(
+         predict,
+         [
+             user_api_key,
+             systemPromptTxt,
+             history,
+             user_input,
+             chatbot,
+             token_count,
+             top_p,
+             temperature,
+             use_streaming_checkbox,
+             model_select_dropdown,
+             use_websearch_checkbox,
+             index_files,
+         ],
+         [chatbot, history, status_display, token_count],
+         show_progress=True,
+     )
      user_input.submit(reset_textbox, [], [user_input])

-     submitBtn.click(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox, model_select_dropdown, use_websearch_checkbox], [chatbot, history, status_display, token_count], show_progress=True)
+     submitBtn.click(
+         predict,
+         [
+             user_api_key,
+             systemPromptTxt,
+             history,
+             user_input,
+             chatbot,
+             token_count,
+             top_p,
+             temperature,
+             use_streaming_checkbox,
+             model_select_dropdown,
+             use_websearch_checkbox,
+             index_files,
+         ],
+         [chatbot, history, status_display, token_count],
+         show_progress=True,
+     )
      submitBtn.click(reset_textbox, [], [user_input])

-     emptyBtn.click(reset_state, outputs=[chatbot, history, token_count, status_display], show_progress=True)
-
-     retryBtn.click(retry, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox, model_select_dropdown], [chatbot, history, status_display, token_count], show_progress=True)
-
-     delLastBtn.click(delete_last_conversation, [chatbot, history, token_count], [
-         chatbot, history, token_count, status_display], show_progress=True)
-
-     reduceTokenBtn.click(reduce_token_size, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox, model_select_dropdown], [chatbot, history, status_display, token_count], show_progress=True)
-
+     emptyBtn.click(
+         reset_state,
+         outputs=[chatbot, history, token_count, status_display],
+         show_progress=True,
+     )
+
+     retryBtn.click(
+         retry,
+         [
+             user_api_key,
+             systemPromptTxt,
+             history,
+             chatbot,
+             token_count,
+             top_p,
+             temperature,
+             use_streaming_checkbox,
+             model_select_dropdown,
+         ],
+         [chatbot, history, status_display, token_count],
+         show_progress=True,
+     )
+
+     delLastBtn.click(
+         delete_last_conversation,
+         [chatbot, history, token_count],
+         [chatbot, history, token_count, status_display],
+         show_progress=True,
+     )
+
+     reduceTokenBtn.click(
+         reduce_token_size,
+         [
+             user_api_key,
+             systemPromptTxt,
+             history,
+             chatbot,
+             token_count,
+             top_p,
+             temperature,
+             gr.State(0),
+             model_select_dropdown,
+         ],
+         [chatbot, history, status_display, token_count],
+         show_progress=True,
+     )
+
+     # Template
+     templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown])
+     templateFileSelectDropdown.change(
+         load_template,
+         [templateFileSelectDropdown],
+         [promptTemplates, templateSelectDropdown],
+         show_progress=True,
+     )
+     templateSelectDropdown.change(
+         get_template_content,
+         [promptTemplates, templateSelectDropdown, systemPromptTxt],
+         [systemPromptTxt],
+         show_progress=True,
+     )
+
-     saveHistoryBtn.click(save_chat_history, [
-         saveFileName, systemPromptTxt, history, chatbot], None, show_progress=True)
-
+     # S&L
+     saveHistoryBtn.click(
+         save_chat_history,
+         [saveFileName, systemPromptTxt, history, chatbot],
+         downloadFile,
+         show_progress=True,
+     )
      saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown])
-
+     exportMarkdownBtn.click(
+         export_markdown,
+         [saveFileName, systemPromptTxt, history, chatbot],
+         downloadFile,
+         show_progress=True,
+     )
      historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown])
-
-     historyFileSelectDropdown.change(load_chat_history, [historyFileSelectDropdown, systemPromptTxt, history, chatbot], [saveFileName, systemPromptTxt, history, chatbot], show_progress=True)
-
-     templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown])
-
-     templateFileSelectDropdown.change(load_template, [templateFileSelectDropdown], [promptTemplates, templateSelectDropdown], show_progress=True)
-
-     templateSelectDropdown.change(get_template_content, [promptTemplates, templateSelectDropdown, systemPromptTxt], [systemPromptTxt], show_progress=True)
-
-     logging.info(colorama.Back.GREEN + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" + colorama.Style.RESET_ALL)
+     historyFileSelectDropdown.change(
+         load_chat_history,
+         [historyFileSelectDropdown, systemPromptTxt, history, chatbot],
+         [saveFileName, systemPromptTxt, history, chatbot],
+         show_progress=True,
+     )
+     downloadFile.change(
+         load_chat_history,
+         [downloadFile, systemPromptTxt, history, chatbot],
+         [saveFileName, systemPromptTxt, history, chatbot],
+     )
+
+     # Advanced
+     default_btn.click(
+         reset_default, [], [apiurlTxt, proxyTxt, status_display], show_progress=True
+     )
+     changeAPIURLBtn.click(
+         change_api_url,
+         [apiurlTxt],
+         [status_display],
+         show_progress=True,
+     )
+     changeProxyBtn.click(
+         change_proxy,
+         [proxyTxt],
+         [status_display],
+         show_progress=True,
+     )
+
+ logging.info(
+     colorama.Back.GREEN
+     + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面"
+     + colorama.Style.RESET_ALL
+ )
  # A local server is enabled by default; it is reachable directly by IP, and no public share link is created by default
- demo.title = "ChatGPT"
-
+ demo.title = "🚀 ChatGPT API 🚀"

  if __name__ == "__main__":
-     #if running in Docker
+     # if running in Docker
      if dockerflag:
          if authflag:
-             demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(username, password))
+             demo.queue().launch(
+                 server_name="0.0.0.0", server_port=7860, auth=(username, password),
+                 favicon_path="./assets/favicon.png"
+             )
          else:
-             demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False)
+             demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False, favicon_path="./assets/favicon.png")
-     #if not running in Docker
+     # if not running in Docker
      else:
          if authflag:
-             demo.queue().launch(share=False, auth=(username, password))
+             demo.queue().launch(share=False, auth=(username, password), favicon_path="./assets/favicon.png")
          else:
-             demo.queue().launch(share=False)  # change to share=True to create a public share link
+             demo.queue().launch(share=False, favicon_path="./assets/favicon.png")  # change to share=True to create a public share link
-     #demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False)  # the port can be customized
-     #demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("fill in a username", "fill in a password"))  # sets a username and password
-     #demo.queue().launch(auth=("fill in a username", "fill in a password"))  # suitable for an Nginx reverse proxy
+     # demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False)  # the port can be customized
+     # demo.queue().launch(server_name="0.0.0.0", server_port=7860, auth=("fill in a username", "fill in a password"))  # sets a username and password
+     # demo.queue().launch(auth=("fill in a username", "fill in a password"))  # suitable for an Nginx reverse proxy
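For readers following the diff above: the new `checkPassword` wiring relies on a Gradio pattern where an event handler returns a dict keyed by output components, and `gr.update(...)` changes component properties such as visibility without replacing values. A minimal, self-contained sketch of that pattern (the `"secret"` password check is a placeholder for this sketch, not the project's logic):

```python
import gradio as gr

with gr.Blocks() as demo:
    pswd = gr.Textbox(label="Password", type="password")
    keyTxt = gr.Textbox(label="API-Key", visible=False)
    status = gr.Markdown("status: ready")
    logBtn = gr.Button("Check password")

    def check_password(text):
        # Returning a dict keyed by output components lets one callback
        # update several outputs at once; gr.update() changes properties
        # (here: visibility) without replacing the component's value.
        if text == "secret":  # placeholder check, for this sketch only
            return {keyTxt: gr.update(visible=True), status: "password accepted"}
        return {keyTxt: gr.update(visible=False), status: "wrong password"}

    logBtn.click(check_password, pswd, [keyTxt, status])

demo.launch()
```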
README.md CHANGED
@@ -1,9 +1,452 @@
- ---
- title: ChatGPT
- sdk: gradio
- emoji: 🚀
- colorFrom: blue
- colorTo: pink
- app_file: ChuanhuChatbot.py
- pinned: false
- ---
+ <h1 align="center">川虎 ChatGPT 🐯 Chuanhu ChatGPT</h1>
+ <div align="center">
+   <a href="https://github.com/GaiZhenBiao/ChuanhuChatGPT">
+     <img src="https://user-images.githubusercontent.com/70903329/226267132-e5295925-f53a-4e9d-a221-6099583da98d.png" alt="Logo" height="156">
+   </a>
+
+ <p align="center">
+   <h3>A light and handy web UI for the ChatGPT API</h3>
+   <p align="center">
+   <a href="https://github.com/GaiZhenbiao/ChuanhuChatGPT/blob/main/LICENSE">
+     <img alt="Tests Passing" src="https://img.shields.io/github/license/GaiZhenbiao/ChuanhuChatGPT" />
+   </a>
+   <a href="https://gradio.app/">
+     <img alt="GitHub Contributors" src="https://img.shields.io/badge/Base-Gradio-fb7d1a?style=flat" />
+   </a>
+   <a href="https://github.com/GaiZhenBiao/ChuanhuChatGPT/graphs/contributors">
+     <img alt="GitHub Contributors" src="https://img.shields.io/github/contributors/GaiZhenBiao/ChuanhuChatGPT" />
+   </a>
+   <a href="https://github.com/GaiZhenBiao/ChuanhuChatGPT/issues">
+     <img alt="Issues" src="https://img.shields.io/github/issues/GaiZhenBiao/ChuanhuChatGPT?color=0088ff" />
+   </a>
+   <a href="https://github.com/GaiZhenBiao/ChuanhuChatGPT/pulls">
+     <img alt="GitHub pull requests" src="https://img.shields.io/github/issues-pr/GaiZhenBiao/ChuanhuChatGPT?color=0088ff" />
+   </a>
+   <p>
+     Streaming replies / Unlimited conversations / Saved chat history / Preset prompt collections / Web search / Answers based on files
+     <br/>
+     LaTeX rendering / Table rendering / Code rendering / Syntax highlighting / Custom API URL / A "small but beautiful" experience / Ready for GPT-4
+   </p>
+   <a href="https://www.bilibili.com/video/BV1mo4y1r7eE"><strong>Video tutorial</strong></a>
+     ·
+   <a href="https://www.bilibili.com/video/BV1184y1w7aP"><strong>2.0 introduction video</strong></a>
+     ·
+   <a href="https://huggingface.co/spaces/JohnSmith9982/ChuanhuChatGPT"><strong>Try it online</strong></a>
+   </p>
+   <p align="center">
+     <img alt="Animation Demo" src="https://user-images.githubusercontent.com/51039745/226255695-6b17ff1f-ea8d-464f-b69b-a7b6b68fffe8.gif" />
+   </p>
+ </p>
+ </div>
+
+ ## Contents
+ |[Usage tips](#usage-tips)|[Installation](#installation)|[Troubleshooting](#troubleshooting)| [Buy the author a coke 🥤](#donate) |
+ | ---- | ---- | ---- | --- |
+
+ ## Usage tips
+
+ - The System Prompt is an effective way to set preconditions.
+ - To use a prompt template, pick a prompt template collection file, then choose the prompt you want from the dropdown.
+ - If an answer is unsatisfactory, click the `重新生成` (regenerate) button to try again.
+ - For long conversations, use the `优化Tokens` button to reduce token usage.
+ - The input box supports line breaks; just press `shift enter`.
+ - To deploy to a server, change the last line of the program to `demo.launch(server_name="0.0.0.0", server_port=<your port number>)`.
+ - To get a public link, change the last line to `demo.launch(share=True)`. Note that the program must be running for the public link to be reachable.
+ - To use it on Hugging Face, it is best to **Duplicate the Space** from the top-right corner first, so that the app responds faster.
+
+
+ ## Installation
+
+ ### Deploy directly on Hugging Face
+
+ Visit [this project's Hugging Face page](https://huggingface.co/spaces/JohnSmith9982/ChuanhuChatGPT), click **Duplicate Space** in the top-right corner, and create a private Space of your own. You can then start using it right away. Don't worry, it's free.
+
+ Do not use my Space directly, or the queue will be very long. Using your own private Space greatly reduces the queueing time and makes the app much more responsive.
+
+ <img width="300" alt="image" src="https://user-images.githubusercontent.com/51039745/223447310-e098a1f2-0dcf-48d6-bcc5-49472dd7ca0d.png">
+
+ Advantages of Hugging Face: easy to deploy (you don't even need a computer), free, and no proxy configuration needed.
+
+ Disadvantages of Hugging Face: it supports a fairly old Gradio version and does not support the latest interface.
+
+ ### Local deployment
+
+ 1. **Download this project**
+
+    ```shell
+    git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
+    cd ChuanhuChatGPT
+    ```
+    Alternatively, click `Download ZIP` at the top right of the page, unzip it, enter the folder, and open a `terminal` or `command prompt`.
+
+    On Windows, hold `shift`, right-click inside the folder, and choose "Open in Terminal"; if that option is missing, choose "Open PowerShell window here". On macOS, right-click the current folder in the path bar at the bottom of a Finder window and choose `Services - New Terminal at Folder`.
+
+    <img width="200" alt="downloadZIP" src="https://user-images.githubusercontent.com/23137268/223696317-b89d2c71-c74d-4c6d-8060-a21406cfb8c8.png">
+
+ 2. **Fill in your API key**
+
+    Choose any one of the following three methods:
+
+    <details><summary>1. Enter your API key in the web UI</summary>
+
+    A key set this way is cleared when the page is refreshed.
+
+    <img width="760" alt="image" src="https://user-images.githubusercontent.com/51039745/222873756-3858bb82-30b9-49bc-9019-36e378ee624d.png"></details>
+    <details><summary>2. Put your OpenAI API key directly in the code</summary>
+
+    A key set this way becomes the default key. You can also choose here whether to hide the key input box in the UI.
+
+    <img width="525" alt="image" src="https://user-images.githubusercontent.com/51039745/223440375-d472de4b-aa7f-4eae-9170-6dc2ed9f5480.png"></details>
+
+    <details><summary>3. Set a default key and a username/password in files</summary>
+
+    A key set this way survives pulling project updates.
+
+    Create these two files in the project folder: `api_key.txt` and `auth.json`.
+
+    Put your API key in `api_key.txt`, and nothing else.
+
+    Put your username and password in `auth.json`:
+
+    ```
+    {
+        "username": "your username",
+        "password": "your password"
+    }
+    ```
+
+    </details>
+
+ 3. **Install the dependencies**
+
+    Type the following command in the terminal and press Enter.
+
+    ```shell
+    pip install -r requirements.txt
+    ```
+
+    If that fails, try
+
+    ```shell
+    pip3 install -r requirements.txt
+    ```
+
+    If it still doesn't work, [install Python](https://www.runoob.com/python/python-install.html) first.
+
+    If downloads are slow, consider [configuring the Tsinghua mirror](https://mirrors.tuna.tsinghua.edu.cn/help/pypi/) or using a proxy.
+
+ 4. **Launch**
+
+    Use the following command.
+
+    ```shell
+    python ChuanhuChatbot.py
+    ```
+
+    If that fails, try
+
+    ```shell
+    python3 ChuanhuChatbot.py
+    ```
+
+    If it still doesn't work, [install Python](https://www.runoob.com/python/python-install.html) first.
+    <br />
+
+    If everything went well, you should now be able to open [`http://localhost:7860`](http://localhost:7860) in your browser and use ChuanhuChatGPT.
+
+ **If you hit a problem during installation, check the [Troubleshooting](#troubleshooting) section first.**
+
+ ### Run with Docker
+
+ <details><summary>If the methods above feel cumbersome, we provide a Docker image</summary>
+
+ #### Pull the image
+
+ ```shell
+ docker pull tuchuanhuhuhu/chuanhuchatgpt:latest
+ ```
+
+ #### Run
+
+ ```shell
+ docker run -d --name chatgpt \
+     -e my_api_key="replace with your API key" \
+     -e USERNAME="replace with a username" \
+     -e PASSWORD="replace with a password" \
+     -v ~/chatGPThistory:/app/history \
+     -p 7860:7860 \
+     tuchuanhuhuhu/chuanhuchatgpt:latest
+ ```
+
+ Note: the `USERNAME` and `PASSWORD` lines can be omitted; if they are, authentication is disabled.
+
+ #### Check the running status
+ ```shell
+ docker logs chatgpt
+ ```
+
+ #### You can also modify the script and build the image yourself
+
+ ```shell
+ docker build -t chuanhuchatgpt:latest .
+ ```
+ </details>
+
+
+ ### Remote deployment
+
+ <details><summary>Read this section if you need to deploy the project on a public server</summary>
+
+ ### Deploy to a public server
+
+ Change the last line to
+
+ ```
+ demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # the port can be customized
+ ```
+ ### Protect the page with a username and password
+
+ Change the last line to
+
+ ```
+ demo.queue().launch(server_name="0.0.0.0", server_port=7860, auth=("fill in a username", "fill in a password")) # sets a username and password
+ ```
+
+ ### Configure an Nginx reverse proxy
+
+ Note: configuring a reverse proxy is not required. It is only needed if you want to serve the app under a domain name.
+
+ Also: with authentication enabled, Nginx currently must be configured with SSL, or you will hit a [cookie mismatch problem](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/89).
+
+ Add a standalone configuration file:
+ ```nginx
+ server {
+     listen 80;
+     server_name /your domain/;   # fill in the domain name you set up
+     access_log off;
+     error_log off;
+     location / {
+         proxy_pass http://127.0.0.1:7860;   # mind the port number
+         proxy_redirect off;
+         proxy_set_header Host $host;
+         proxy_set_header X-Real-IP $remote_addr;
+         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+         proxy_set_header Upgrade $http_upgrade;           # WebSocket configuration
+         proxy_set_header Connection $connection_upgrade;  # WebSocket configuration
+         proxy_max_temp_file_size 0;
+         client_max_body_size 10m;
+         client_body_buffer_size 128k;
+         proxy_connect_timeout 90;
+         proxy_send_timeout 90;
+         proxy_read_timeout 90;
+         proxy_buffer_size 4k;
+         proxy_buffers 4 32k;
+         proxy_busy_buffers_size 64k;
+         proxy_temp_file_write_size 64k;
+     }
+ }
+ ```
+
+ Edit the `nginx.conf` configuration file (usually at `/etc/nginx/nginx.conf`) and add the following to the http section
+ (this step sets up the WebSocket connection; skip it if you have already configured it):
+ ```nginx
+ map $http_upgrade $connection_upgrade {
+     default upgrade;
+     '' close;
+ }
+ ```
+
+ To combine domain access with authentication, you need an SSL certificate; see [this blog post](https://www.gzblog.tech/2020/12/25/how-to-config-hexo/#%E9%85%8D%E7%BD%AEHTTPS) for a one-step setup.
+
+
+ ### Enable HTTPS for ChuanhuChatGPT entirely with Docker
+
+ If ports 80 and 443 on your VPS are free, you can consider the approach below; you only need to point your domain at your VPS's IP in advance. This method was contributed by [@iskoldt-X](https://github.com/iskoldt-X).
+
+ First, run [nginx-proxy](https://github.com/nginx-proxy/nginx-proxy):
+
+ ```
+ docker run --detach \
+     --name nginx-proxy \
+     --publish 80:80 \
+     --publish 443:443 \
+     --volume certs:/etc/nginx/certs \
+     --volume vhost:/etc/nginx/vhost.d \
+     --volume html:/usr/share/nginx/html \
+     --volume /var/run/docker.sock:/tmp/docker.sock:ro \
+     nginxproxy/nginx-proxy
+ ```
+ Next, run [acme-companion](https://github.com/nginx-proxy/acme-companion), the container that automatically obtains TLS certificates:
+
+ ```
+ docker run --detach \
+     --name nginx-proxy-acme \
+     --volumes-from nginx-proxy \
+     --volume /var/run/docker.sock:/var/run/docker.sock:ro \
+     --volume acme:/etc/acme.sh \
+     --env "DEFAULT_EMAIL=your email (used to request the TLS certificate)" \
+     nginxproxy/acme-companion
+ ```
+
+ Finally, run ChuanhuChatGPT:
+ ```
+ docker run -d --name chatgpt \
+     -e my_api_key="your API key" \
+     -e USERNAME="replace with a username" \
+     -e PASSWORD="replace with a password" \
+     -v ~/chatGPThistory:/app/history \
+     -e VIRTUAL_HOST=your.domain \
+     -e VIRTUAL_PORT=7860 \
+     -e LETSENCRYPT_HOST=your.domain \
+     tuchuanhuhuhu/chuanhuchatgpt:latest
+ ```
+ This gives ChuanhuChatGPT automatic TLS certificates and HTTPS.
+ </details>
+
+ ---
+
+ ## Troubleshooting
+
+ First, pull the latest changes to this project and retry with the newest code.
+
+ Click `Download ZIP` on the page to download the latest code, or run
+ ```shell
+ git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
+ ```
+
+ If the problem persists, try reinstalling Gradio:
+
+ ```
+ pip install gradio --upgrade --force-reinstall
+ ```
+
+ This solves the problem in many cases.
+
+ ### Common problems
+
+ <details><summary>Configuring a proxy</summary>
+
+ OpenAI does not allow using the API from unsupported regions, and doing so may get your account restricted. Example proxy configurations follow:
+
+ In your Clash configuration file, add:
+
+ ```
+ rule-providers:
+   private:
+     type: http
+     behavior: domain
+     url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/lancidr.txt"
+     path: ./ruleset/ads.yaml
+     interval: 86400
+
+ rules:
+   - RULE-SET,private,DIRECT
+   - DOMAIN-SUFFIX,openai.com,your proxy rule
+ ```
+
+ If you use Surge, add to your configuration file:
+
+ ```
+ [Rule]
+ DOMAIN-SET,https://cdn.jsdelivr.net/gh/Loyalsoldier/surge-rules@release/private.txt,DIRECT
+ DOMAIN-SUFFIX,openai.com,your proxy rule
+ ```
+ Note: if you already have these sections, merge the rules into the existing ones, or the proxy software will report an error.
+
+ </details>
+
+ <details><summary><code>TypeError: Base.set () got an unexpected keyword argument</code></summary>
+
+ Chuanhu ChatGPT keeps pace with Gradio development, so your Gradio version is too old. Upgrade the dependencies:
+
+ ```
+ pip install -r requirements.txt --upgrade
+ ```
+ </details>
+
+ <details><summary><code>No module named '_bz2'</code></summary>
+
+ > Deployed on CentOS 7.6 with Python 3.11.0; it ends with ModuleNotFoundError: No module named '_bz2'
+
+ Install the `bzip` build environment before building Python:
+
+ ```
+ sudo yum install bzip2-devel
+ ```
+ </details>
+
+ <details><summary><code>openai.error.APIConnectionError</code></summary>
+
+ > If you also see an error mentioning `openai.error.APIConnectionError`, it may be caused by the `urllib3` version. Versions of `urllib3` above `1.25.11` trigger this problem.
+ >
+ > The fix is to uninstall `urllib3`, reinstall version `1.25.11`, and run again.
+
+ See: [#5](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/5)
+
+ Uninstall `urllib3` in a terminal or command prompt:
+
+ ```
+ pip uninstall urllib3
+ ```
+
+ Then install the required version with a pinned `pip install`:
+
+ ```
+ pip install urllib3==1.25.11
+ ```
+
+ Adapted from:
+ [Fixing OpenAI API connection failures even with a proxy](https://zhuanlan.zhihu.com/p/611080662)
+ </details>
+
+ <details><summary><code>Key validation fails after setting the API key in the Python file</code></summary>
+
+ > After setting the API key in ChuanhuChatbot.py, validation fails with "An unknown error occurred Orz"
+
+ See: [#26](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/26)
+ </details>
+
+ <details><summary><code>Endless waiting / SSL Error</code></summary>
+
+ > SSLError after updating the script [#49](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/49)
+ >
+ > Once it is running, typing a question gets no response and no error [#25](https://github.com/GaiZhenbiao/ChuanhuChatGPT/issues/25)
+ >
+ > ```
+ > requests.exceptions.SSLError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))
+ > ```
+
+ See the proxy-configuration section above and add `openai.com` to the rules of the proxy app you use. Do not route `127.0.0.1` through the proxy, or you will hit the next error.
+
+ </details>
+
+ <details><summary><code>The page shows the error Something went wrong</code></summary>
+
+ > ```
+ > Something went wrong
+ > Expecting value: line 1 column 1 (char 0)
+ > ```
+
+ This error occurs because `127.0.0.1` is being proxied, so the page cannot talk to the backend. Configure your proxy software to connect to `127.0.0.1` directly (see the "Endless waiting / SSL Error" section above).
+ </details>
+
+ <details><summary><code>No matching distribution found for openai>=0.27.0</code></summary>
+
+ The `openai` dependency has been removed. Try downloading the latest version of the script.
+ </details>
+
+ ## Starchart
+
+ [![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
+
+ ## Contributors
+
+ <a href="https://github.com/GaiZhenbiao/ChuanhuChatGPT/graphs/contributors">
+   <img src="https://contrib.rocks/image?repo=GaiZhenbiao/ChuanhuChatGPT" />
+ </a>
+
+ ## Donate
+
+ 🐯 Buy the author a coke~
+
+ <img width="350" alt="image" src="https://user-images.githubusercontent.com/51039745/223626874-f471e5f5-8a06-43d5-aa31-9d2575b6f631.JPG">
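The key-loading behaviour described in step 2 of the local-deployment instructions matches the fallback order in `ChuanhuChatbot.py`. A minimal sketch of that order, for anyone scripting around it (file names as in the README; the rest is illustrative):

```python
import json
import os

my_api_key = ""  # option 2: a key hard-coded here becomes the default

# option 3: fall back to api_key.txt when no key is set in the code
if not my_api_key and os.path.exists("api_key.txt") and os.path.getsize("api_key.txt"):
    with open("api_key.txt", "r") as f:
        my_api_key = f.read().strip()

# auth.json, if present, supplies the username/password pair
username = password = None
if os.path.exists("auth.json"):
    with open("auth.json", "r", encoding="utf-8") as f:
        auth = json.load(f)
        username, password = auth.get("username"), auth.get("password")

authflag = username is not None and password is not None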
assets/favicon.png ADDED
chat_func.py ADDED
@@ -0,0 +1,456 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # -*- coding:utf-8 -*-
2
+ from __future__ import annotations
3
+ from typing import TYPE_CHECKING, List
4
+
5
+ import logging
6
+ import json
7
+ import os
8
+ import requests
9
+ import urllib3
10
+
11
+ from tqdm import tqdm
12
+ import colorama
13
+ from duckduckgo_search import ddg
14
+ import asyncio
15
+ import aiohttp
16
+
17
+ from presets import *
18
+ from llama_func import *
19
+ from utils import *
20
+
21
+ # logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s")
22
+
23
+ if TYPE_CHECKING:
24
+ from typing import TypedDict
25
+
26
+ class DataframeData(TypedDict):
27
+ headers: List[str]
28
+ data: List[List[str | int | bool]]
29
+
30
+
31
+ initial_prompt = "You are a helpful assistant."
32
+ API_URL = "https://api.openai.com/v1/chat/completions"
33
+ HISTORY_DIR = "history"
34
+ TEMPLATES_DIR = "templates"
35
+
36
+ def get_response(
37
+ openai_api_key, system_prompt, history, temperature, top_p, stream, selected_model
38
+ ):
39
+ headers = {
40
+ "Content-Type": "application/json",
41
+ "Authorization": f"Bearer {openai_api_key}",
42
+ }
43
+
44
+ history = [construct_system(system_prompt), *history]
45
+
46
+ payload = {
47
+ "model": selected_model,
48
+ "messages": history, # [{"role": "user", "content": f"{inputs}"}],
49
+ "temperature": temperature, # 1.0,
50
+ "top_p": top_p, # 1.0,
51
+ "n": 1,
52
+ "stream": stream,
53
+ "presence_penalty": 0,
54
+ "frequency_penalty": 0,
55
+ }
56
+ if stream:
57
+ timeout = timeout_streaming
58
+ else:
59
+ timeout = timeout_all
60
+
61
+ # 获取环境变量中的代理设置
62
+     http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy")
+     https_proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy")
+
+     # use the proxy settings if any are present
+     proxies = {}
+     if http_proxy:
+         logging.info(f"Using HTTP proxy: {http_proxy}")
+         proxies["http"] = http_proxy
+     if https_proxy:
+         logging.info(f"Using HTTPS proxy: {https_proxy}")
+         proxies["https"] = https_proxy
+
+     # send the request through the proxy if one is configured, otherwise with the default settings
+     if proxies:
+         response = requests.post(
+             API_URL,
+             headers=headers,
+             json=payload,
+             stream=True,
+             timeout=timeout,
+             proxies=proxies,
+         )
+     else:
+         response = requests.post(
+             API_URL,
+             headers=headers,
+             json=payload,
+             stream=True,
+             timeout=timeout,
+         )
+     return response
+
+
+ def stream_predict(
+     openai_api_key,
+     system_prompt,
+     history,
+     inputs,
+     chatbot,
+     all_token_counts,
+     top_p,
+     temperature,
+     selected_model,
+     fake_input=None,
+     display_append="",
+ ):
+     def get_return_value():
+         return chatbot, history, status_text, all_token_counts
+
+     logging.info("实时回答模式")
+     partial_words = ""
+     counter = 0
+     status_text = "开始实时传输回答……"
+     history.append(construct_user(inputs))
+     history.append(construct_assistant(""))
+     if fake_input:
+         chatbot.append((fake_input, ""))
+     else:
+         chatbot.append((inputs, ""))
+     user_token_count = 0
+     if len(all_token_counts) == 0:
+         system_prompt_token_count = count_token(construct_system(system_prompt))
+         user_token_count = (
+             count_token(construct_user(inputs)) + system_prompt_token_count
+         )
+     else:
+         user_token_count = count_token(construct_user(inputs))
+     all_token_counts.append(user_token_count)
+     logging.info(f"输入token计数: {user_token_count}")
+     yield get_return_value()
+     try:
+         response = get_response(
+             openai_api_key,
+             system_prompt,
+             history,
+             temperature,
+             top_p,
+             True,
+             selected_model,
+         )
+     except requests.exceptions.ConnectTimeout:
+         status_text = (
+             standard_error_msg + connection_timeout_prompt + error_retrieve_prompt
+         )
+         yield get_return_value()
+         return
+     except requests.exceptions.ReadTimeout:
+         status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt
+         yield get_return_value()
+         return
+
+     yield get_return_value()
+     error_json_str = ""
+
+     for chunk in tqdm(response.iter_lines()):
+         if counter == 0:
+             # the first event only announces the assistant role, skip it
+             counter += 1
+             continue
+         counter += 1
+         # check whether each line is non-empty
+         if chunk:
+             # decode each line, since the response data arrives as bytes
+             chunk = chunk.decode()
+             chunklength = len(chunk)
+             try:
+                 # every SSE line looks like 'data: {...}', so drop the 6-character prefix
+                 chunk = json.loads(chunk[6:])
+             except json.JSONDecodeError:
+                 logging.info(chunk)
+                 error_json_str += chunk
+                 status_text = f"JSON解析错误。请重置对话。收到的内容: {error_json_str}"
+                 yield get_return_value()
+                 continue
+             # lines longer than the bare "data: " prefix carry an actual delta
+             if chunklength > 6 and "delta" in chunk["choices"][0]:
+                 finish_reason = chunk["choices"][0]["finish_reason"]
+                 status_text = construct_token_message(
+                     sum(all_token_counts), stream=True
+                 )
+                 if finish_reason == "stop":
+                     yield get_return_value()
+                     break
+                 try:
+                     partial_words = (
+                         partial_words + chunk["choices"][0]["delta"]["content"]
+                     )
+                 except KeyError:
+                     status_text = (
+                         standard_error_msg
+                         + "API回复中找不到内容。很可能是Token计数达到上限了。请重置对话。当前Token计数: "
+                         + str(sum(all_token_counts))
+                     )
+                     yield get_return_value()
+                     break
+                 history[-1] = construct_assistant(partial_words)
+                 chatbot[-1] = (chatbot[-1][0], partial_words + display_append)
+                 all_token_counts[-1] += 1
+                 yield get_return_value()
+
+
+ def predict_all(
+     openai_api_key,
+     system_prompt,
+     history,
+     inputs,
+     chatbot,
+     all_token_counts,
+     top_p,
+     temperature,
+     selected_model,
+     fake_input=None,
+     display_append="",
+ ):
+     logging.info("一次性回答模式")
+     history.append(construct_user(inputs))
+     history.append(construct_assistant(""))
+     if fake_input:
+         chatbot.append((fake_input, ""))
+     else:
+         chatbot.append((inputs, ""))
+     all_token_counts.append(count_token(construct_user(inputs)))
+     try:
+         response = get_response(
+             openai_api_key,
+             system_prompt,
+             history,
+             temperature,
+             top_p,
+             False,
+             selected_model,
+         )
+     except requests.exceptions.ConnectTimeout:
+         status_text = (
+             standard_error_msg + connection_timeout_prompt + error_retrieve_prompt
+         )
+         return chatbot, history, status_text, all_token_counts
+     except requests.exceptions.ProxyError:
+         status_text = standard_error_msg + proxy_error_prompt + error_retrieve_prompt
+         return chatbot, history, status_text, all_token_counts
+     except requests.exceptions.SSLError:
+         status_text = standard_error_msg + ssl_error_prompt + error_retrieve_prompt
+         return chatbot, history, status_text, all_token_counts
+     response = json.loads(response.text)
+     content = response["choices"][0]["message"]["content"]
+     history[-1] = construct_assistant(content)
+     chatbot[-1] = (chatbot[-1][0], content + display_append)
+     total_token_count = response["usage"]["total_tokens"]
+     all_token_counts[-1] = total_token_count - sum(all_token_counts)
+     status_text = construct_token_message(total_token_count)
+     return chatbot, history, status_text, all_token_counts
+
+
+ def predict(
+     openai_api_key,
+     system_prompt,
+     history,
+     inputs,
+     chatbot,
+     all_token_counts,
+     top_p,
+     temperature,
+     stream=False,
+     selected_model=MODELS[0],
+     use_websearch=False,
+     files=None,
+     should_check_token_count=True,
+ ):  # repetition_penalty, top_k
+     logging.info("输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL)
+     if files:
+         msg = "构建索引中……(这可能需要比较久的时间)"
+         logging.info(msg)
+         yield chatbot, history, msg, all_token_counts
+         index = construct_index(openai_api_key, file_src=files)
+         msg = "索引构建完成,获取回答中……"
+         yield chatbot, history, msg, all_token_counts
+         history, chatbot, status_text = chat_ai(openai_api_key, index, inputs, history, chatbot)
+         yield chatbot, history, status_text, all_token_counts
+         return
+
+     old_inputs = ""
+     link_references = []
+     if use_websearch:
+         search_results = ddg(inputs, max_results=5)
+         old_inputs = inputs
+         web_results = []
+         for idx, result in enumerate(search_results):
+             logging.info(f"搜索结果{idx + 1}:{result}")
+             domain_name = urllib3.util.parse_url(result["href"]).host
+             web_results.append(f'[{idx+1}]"{result["body"]}"\nURL: {result["href"]}')
+             link_references.append(f"{idx+1}. [{domain_name}]({result['href']})\n")
+         link_references = "\n\n" + "".join(link_references)
+         inputs = (
+             replace_today(WEBSEARCH_PTOMPT_TEMPLATE)
+             .replace("{query}", inputs)
+             .replace("{web_results}", "\n\n".join(web_results))
+         )
+     else:
+         link_references = ""
+
+     if len(openai_api_key) != 51:
+         status_text = standard_error_msg + no_apikey_msg
+         logging.info(status_text)
+         chatbot.append((inputs, ""))
+         if len(history) == 0:
+             history.append(construct_user(inputs))
+             history.append("")
+             all_token_counts.append(0)
+         else:
+             history[-2] = construct_user(inputs)
+         yield chatbot, history, status_text, all_token_counts
+         return
+
+     yield chatbot, history, "开始生成回答……", all_token_counts
+
+     if stream:
+         logging.info("使用流式传输")
+         iter = stream_predict(
+             openai_api_key,
+             system_prompt,
+             history,
+             inputs,
+             chatbot,
+             all_token_counts,
+             top_p,
+             temperature,
+             selected_model,
+             fake_input=old_inputs,
+             display_append=link_references,
+         )
+         for chatbot, history, status_text, all_token_counts in iter:
+             yield chatbot, history, status_text, all_token_counts
+     else:
+         logging.info("不使用流式传输")
+         chatbot, history, status_text, all_token_counts = predict_all(
+             openai_api_key,
+             system_prompt,
+             history,
+             inputs,
+             chatbot,
+             all_token_counts,
+             top_p,
+             temperature,
+             selected_model,
+             fake_input=old_inputs,
+             display_append=link_references,
+         )
+         yield chatbot, history, status_text, all_token_counts
+
+     logging.info(f"传输完毕。当前token计数为{all_token_counts}")
+     if len(history) > 1 and history[-1]["content"] != inputs:
+         logging.info(
+             "回答为:"
+             + colorama.Fore.BLUE
+             + f"{history[-1]['content']}"
+             + colorama.Style.RESET_ALL
+         )
+
+     if stream:
+         max_token = max_token_streaming
+     else:
+         max_token = max_token_all
+
+     if sum(all_token_counts) > max_token and should_check_token_count:
+         status_text = f"精简token中{all_token_counts}/{max_token}"
+         logging.info(status_text)
+         yield chatbot, history, status_text, all_token_counts
+         iter = reduce_token_size(
+             openai_api_key,
+             system_prompt,
+             history,
+             chatbot,
+             all_token_counts,
+             top_p,
+             temperature,
+             max_token // 2,
+             selected_model=selected_model,
+         )
+         for chatbot, history, status_text, all_token_counts in iter:
+             status_text = f"Token 达到上限,已自动降低Token计数至 {status_text}"
+             yield chatbot, history, status_text, all_token_counts
+
+
+ def retry(
+     openai_api_key,
+     system_prompt,
+     history,
+     chatbot,
+     token_count,
+     top_p,
+     temperature,
+     stream=False,
+     selected_model=MODELS[0],
+ ):
+     logging.info("重试中……")
+     if len(history) == 0:
+         yield chatbot, history, f"{standard_error_msg}上下文是空的", token_count
+         return
+     history.pop()
+     inputs = history.pop()["content"]
+     token_count.pop()
+     iter = predict(
+         openai_api_key,
+         system_prompt,
+         history,
+         inputs,
+         chatbot,
+         token_count,
+         top_p,
+         temperature,
+         stream=stream,
+         selected_model=selected_model,
+     )
+     for x in iter:
+         yield x
+     logging.info("重试完毕")
+
+
+ def reduce_token_size(
+     openai_api_key,
+     system_prompt,
+     history,
+     chatbot,
+     token_count,
+     top_p,
+     temperature,
+     max_token_count,
+     selected_model=MODELS[0],
+ ):
+     logging.info("开始减少token数量……")
+     iter = predict(
+         openai_api_key,
+         system_prompt,
+         history,
+         summarize_prompt,
+         chatbot,
+         token_count,
+         top_p,
+         temperature,
+         selected_model=selected_model,
+         should_check_token_count=False,
+     )
+     logging.info(f"chatbot: {chatbot}")
+     flag = False
+     for chatbot, history, status_text, previous_token_count in iter:
+         num_chat = find_n(previous_token_count, max_token_count)
+         if flag:
+             # drop the summary bubble appended by the previous iteration
+             chatbot = chatbot[:-1]
+         flag = True
+         history = history[-2 * num_chat:] if num_chat > 0 else []
+         token_count = previous_token_count[-num_chat:] if num_chat > 0 else []
+         msg = f"保留了最近{num_chat}轮对话"
+         yield chatbot, history, msg + "," + construct_token_message(
+             sum(token_count) if len(token_count) > 0 else 0,
+         ), token_count
+         logging.info(msg)
+     logging.info("减少token数量完毕")
chatgpt - macOS.command ADDED
@@ -0,0 +1,7 @@
+ #!/bin/bash
+ echo "Opening ChuanhuChatGPT..."
+ cd "$(dirname "${BASH_SOURCE[0]}")"
+ nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 &
+ sleep 5
+ open http://127.0.0.1:7860
+ echo "Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). To stop ChuanhuChatbot, run \"pkill -f 'ChuanhuChatbot'\" in the terminal."
chatgpt - windows.bat CHANGED
@@ -5,10 +5,10 @@ REM Open powershell via bat
  start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py"

  REM The web page can be accessed with delayed start http://127.0.0.1:7860/
- #ping -n 5 127.0.0.1>nul
+ ping -n 5 127.0.0.1>nul

- REM access chargpt via your browser (default microsoft edge browser)
- #start microsoft-edge:http://127.0.0.1:7860/
+ REM access ChatGPT via your default browser
+ start "" "http://127.0.0.1:7860/"


  echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/).
custom.css ADDED
@@ -0,0 +1,201 @@
+ :root {
+     --chatbot-color-light: #F3F3F3;
+     --chatbot-color-dark: #121111;
+ }
+
+ /* status_display */
+ #status_display {
+     display: flex;
+     min-height: 2.5em;
+     align-items: flex-end;
+     justify-content: flex-end;
+ }
+ #status_display p {
+     font-size: .85em;
+     font-family: monospace;
+     color: var(--body-text-color-subdued);
+ }
+
+ #chuanhu_chatbot, #status_display {
+     transition: all 0.6s;
+ }
+
+ ol, ul {
+     list-style-position: inside;
+     padding-left: 0;
+ }
+
+ ol li, ul:not(.options) li {
+     padding-left: 1.5em;
+     text-indent: -1.5em;
+ }
+
+ /* light mode */
+ @media (prefers-color-scheme: light) {
+     #chuanhu_chatbot {
+         background-color: var(--chatbot-color-light) !important;
+     }
+     [data-testid = "bot"] {
+         background-color: #FFFFFF !important;
+     }
+     [data-testid = "user"] {
+         background-color: #95EC69 !important;
+     }
+ }
+ /* dark mode */
+ @media (prefers-color-scheme: dark) {
+     #chuanhu_chatbot {
+         background-color: var(--chatbot-color-dark) !important;
+     }
+     [data-testid = "bot"] {
+         background-color: #2C2C2C !important;
+     }
+     [data-testid = "user"] {
+         background-color: #26B561 !important;
+     }
+     body {
+         background-color: var(--neutral-950) !important;
+     }
+ }
+ /* screens at least 500px wide */
+ @media screen and (min-width: 500px) {
+     #chuanhu_chatbot {
+         height: calc(100vh - 200px);
+     }
+     #chuanhu_chatbot .wrap {
+         max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
+     }
+ }
+ /* screens narrower than 500px */
+ @media screen and (max-width: 499px) {
+     #chuanhu_chatbot {
+         height: calc(100vh - 140px);
+     }
+     #chuanhu_chatbot .wrap {
+         max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
+     }
+ }
+ /* chat bubbles */
+ [class *= "message"] {
+     border-radius: var(--radius-xl) !important;
+     border: none;
+     padding: var(--spacing-xl) !important;
+     font-size: var(--text-md) !important;
+     line-height: var(--line-md) !important;
+ }
+ [data-testid = "bot"] {
+     max-width: 85%;
+     border-bottom-left-radius: 0 !important;
+ }
+ [data-testid = "user"] {
+     max-width: 85%;
+     width: auto !important;
+     border-bottom-right-radius: 0 !important;
+ }
+ /* tables */
+ table {
+     margin: 1em 0;
+     border-collapse: collapse;
+     empty-cells: show;
+ }
+ td,th {
+     border: 1.2px solid var(--border-color-primary) !important;
+     padding: 0.2em;
+ }
+ thead {
+     background-color: rgba(175,184,193,0.2);
+ }
+ thead th {
+     padding: .5em .2em;
+ }
+ /* inline code */
+ code {
+     display: inline;
+     white-space: break-spaces;
+     border-radius: 6px;
+     margin: 0 2px 0 2px;
+     padding: .2em .4em .1em .4em;
+     background-color: rgba(175,184,193,0.2);
+ }
+ /* code blocks */
+ pre code {
+     display: block;
+     overflow: auto;
+     white-space: pre;
+     background-color: hsla(0, 0%, 0%, 80%)!important;
+     border-radius: 10px;
+     padding: 1rem 1.2rem 1rem;
+     margin: 1.2em 2em 1.2em 0.5em;
+     color: #FFF;
+     box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
+ }
+ /* syntax highlighting styles */
+ .highlight .hll { background-color: #49483e }
+ .highlight .c { color: #75715e } /* Comment */
+ .highlight .err { color: #960050; background-color: #1e0010 } /* Error */
+ .highlight .k { color: #66d9ef } /* Keyword */
+ .highlight .l { color: #ae81ff } /* Literal */
+ .highlight .n { color: #f8f8f2 } /* Name */
+ .highlight .o { color: #f92672 } /* Operator */
+ .highlight .p { color: #f8f8f2 } /* Punctuation */
+ .highlight .ch { color: #75715e } /* Comment.Hashbang */
+ .highlight .cm { color: #75715e } /* Comment.Multiline */
+ .highlight .cp { color: #75715e } /* Comment.Preproc */
+ .highlight .cpf { color: #75715e } /* Comment.PreprocFile */
+ .highlight .c1 { color: #75715e } /* Comment.Single */
+ .highlight .cs { color: #75715e } /* Comment.Special */
+ .highlight .gd { color: #f92672 } /* Generic.Deleted */
+ .highlight .ge { font-style: italic } /* Generic.Emph */
+ .highlight .gi { color: #a6e22e } /* Generic.Inserted */
+ .highlight .gs { font-weight: bold } /* Generic.Strong */
+ .highlight .gu { color: #75715e } /* Generic.Subheading */
+ .highlight .kc { color: #66d9ef } /* Keyword.Constant */
+ .highlight .kd { color: #66d9ef } /* Keyword.Declaration */
+ .highlight .kn { color: #f92672 } /* Keyword.Namespace */
+ .highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
+ .highlight .kr { color: #66d9ef } /* Keyword.Reserved */
+ .highlight .kt { color: #66d9ef } /* Keyword.Type */
+ .highlight .ld { color: #e6db74 } /* Literal.Date */
+ .highlight .m { color: #ae81ff } /* Literal.Number */
+ .highlight .s { color: #e6db74 } /* Literal.String */
+ .highlight .na { color: #a6e22e } /* Name.Attribute */
+ .highlight .nb { color: #f8f8f2 } /* Name.Builtin */
+ .highlight .nc { color: #a6e22e } /* Name.Class */
+ .highlight .no { color: #66d9ef } /* Name.Constant */
+ .highlight .nd { color: #a6e22e } /* Name.Decorator */
+ .highlight .ni { color: #f8f8f2 } /* Name.Entity */
+ .highlight .ne { color: #a6e22e } /* Name.Exception */
+ .highlight .nf { color: #a6e22e } /* Name.Function */
+ .highlight .nl { color: #f8f8f2 } /* Name.Label */
+ .highlight .nn { color: #f8f8f2 } /* Name.Namespace */
+ .highlight .nx { color: #a6e22e } /* Name.Other */
+ .highlight .py { color: #f8f8f2 } /* Name.Property */
+ .highlight .nt { color: #f92672 } /* Name.Tag */
+ .highlight .nv { color: #f8f8f2 } /* Name.Variable */
+ .highlight .ow { color: #f92672 } /* Operator.Word */
+ .highlight .w { color: #f8f8f2 } /* Text.Whitespace */
+ .highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
+ .highlight .mf { color: #ae81ff } /* Literal.Number.Float */
+ .highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
+ .highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
+ .highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
+ .highlight .sa { color: #e6db74 } /* Literal.String.Affix */
+ .highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
+ .highlight .sc { color: #e6db74 } /* Literal.String.Char */
+ .highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
+ .highlight .sd { color: #e6db74 } /* Literal.String.Doc */
+ .highlight .s2 { color: #e6db74 } /* Literal.String.Double */
+ .highlight .se { color: #ae81ff } /* Literal.String.Escape */
+ .highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
+ .highlight .si { color: #e6db74 } /* Literal.String.Interpol */
+ .highlight .sx { color: #e6db74 } /* Literal.String.Other */
+ .highlight .sr { color: #e6db74 } /* Literal.String.Regex */
+ .highlight .s1 { color: #e6db74 } /* Literal.String.Single */
+ .highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
+ .highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
+ .highlight .fm { color: #a6e22e } /* Name.Function.Magic */
+ .highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
+ .highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
+ .highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
+ .highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
+ .highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
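
Editor's note: the `.highlight` rules above target the class names Pygments assigns to tokens (a Monokai-style palette). A small sketch of how matching markup is produced; exact output details vary by Pygments version:

```python
from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters import HtmlFormatter

# HtmlFormatter wraps its output in <div class="highlight"> and tags tokens
# with the short class names (.k, .s, .c1, ...) that custom.css colors.
html = highlight('print("hi")', get_lexer_by_name("python"), HtmlFormatter())
```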
llama_func.py ADDED
@@ -0,0 +1,192 @@
+ import os
+ import logging
+
+ from llama_index import GPTSimpleVectorIndex
+ from llama_index import download_loader
+ from llama_index import (
+     Document,
+     LLMPredictor,
+     PromptHelper,
+     QuestionAnswerPrompt,
+     RefinePrompt,
+ )
+ from langchain.llms import OpenAI
+ import colorama
+
+
+ from presets import *
+ from utils import *
+
+
+ def get_documents(file_src):
+     documents = []
+     index_name = ""
+     logging.debug("Loading documents...")
+     logging.debug(f"file_src: {file_src}")
+     for file in file_src:
+         logging.debug(f"file: {file.name}")
+         index_name += file.name
+         if os.path.splitext(file.name)[1] == ".pdf":
+             logging.debug("Loading PDF...")
+             CJKPDFReader = download_loader("CJKPDFReader")
+             loader = CJKPDFReader()
+             documents += loader.load_data(file=file.name)
+         elif os.path.splitext(file.name)[1] == ".docx":
+             logging.debug("Loading DOCX...")
+             DocxReader = download_loader("DocxReader")
+             loader = DocxReader()
+             documents += loader.load_data(file=file.name)
+         elif os.path.splitext(file.name)[1] == ".epub":
+             logging.debug("Loading EPUB...")
+             EpubReader = download_loader("EpubReader")
+             loader = EpubReader()
+             documents += loader.load_data(file=file.name)
+         else:
+             logging.debug("Loading text file...")
+             with open(file.name, "r", encoding="utf-8") as f:
+                 text = add_space(f.read())
+             documents += [Document(text)]
+     # the cache key is the SHA-1 of the concatenated file names
+     index_name = sha1sum(index_name)
+     return documents, index_name
+
+
+ def construct_index(
+     api_key,
+     file_src,
+     max_input_size=4096,
+     num_outputs=1,
+     max_chunk_overlap=20,
+     chunk_size_limit=600,
+     embedding_limit=None,
+     separator=" ",
+     num_children=10,
+     max_keywords_per_chunk=10,
+ ):
+     os.environ["OPENAI_API_KEY"] = api_key
+     chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
+     embedding_limit = None if embedding_limit == 0 else embedding_limit
+     separator = " " if separator == "" else separator
+
+     llm_predictor = LLMPredictor(
+         llm=OpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key)
+     )
+     prompt_helper = PromptHelper(
+         max_input_size,
+         num_outputs,
+         max_chunk_overlap,
+         embedding_limit,
+         chunk_size_limit,
+         separator=separator,
+     )
+     documents, index_name = get_documents(file_src)
+     if os.path.exists(f"./index/{index_name}.json"):
+         logging.info("找到了缓存的索引文件,加载中……")
+         return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
+     else:
+         try:
+             logging.debug("构建索引中……")
+             index = GPTSimpleVectorIndex(
+                 documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper
+             )
+             os.makedirs("./index", exist_ok=True)
+             index.save_to_disk(f"./index/{index_name}.json")
+             return index
+         except Exception as e:
+             print(e)
+             return None
+
+
+ def chat_ai(
+     api_key,
+     index,
+     question,
+     context,
+     chatbot,
+ ):
+     os.environ["OPENAI_API_KEY"] = api_key
+
+     logging.info(f"Question: {question}")
+
+     response, chatbot_display, status_text = ask_ai(
+         api_key,
+         index,
+         question,
+         replace_today(PROMPT_TEMPLATE),
+         REFINE_TEMPLATE,
+         SIM_K,
+         INDEX_QUERY_TEMPRATURE,
+         context,
+     )
+     if response is None:
+         status_text = "查询失败,请换个问法试试"
+         return context, chatbot, status_text
+
+     context.append({"role": "user", "content": question})
+     context.append({"role": "assistant", "content": response})
+     chatbot.append((question, chatbot_display))
+
+     os.environ["OPENAI_API_KEY"] = ""
+     return context, chatbot, status_text
+
+
+ def ask_ai(
+     api_key,
+     index,
+     question,
+     prompt_tmpl,
+     refine_tmpl,
+     sim_k=1,
+     temprature=0,
+     prefix_messages=[],
+ ):
+     os.environ["OPENAI_API_KEY"] = api_key
+
+     logging.debug("Index file found")
+     logging.debug("Querying index...")
+     llm_predictor = LLMPredictor(
+         llm=OpenAI(
+             temperature=temprature,
+             model_name="gpt-3.5-turbo-0301",
+             prefix_messages=prefix_messages,
+         )
+     )
+
+     qa_prompt = QuestionAnswerPrompt(prompt_tmpl)
+     rf_prompt = RefinePrompt(refine_tmpl)
+     response = index.query(
+         question,
+         llm_predictor=llm_predictor,
+         similarity_top_k=sim_k,
+         text_qa_template=qa_prompt,
+         refine_template=rf_prompt,
+         response_mode="compact",
+     )
+
+     if response is not None:
+         logging.info(f"Response: {response}")
+         ret_text = response.response
+         nodes = []
+         for i, node in enumerate(response.source_nodes):
+             brief = node.source_text[:25].replace("\n", "")
+             nodes.append(
+                 f"<details><summary>[{i+1}]\t{brief}...</summary><p>{node.source_text}</p></details>"
+             )
+         new_response = ret_text + "\n----------\n" + "\n\n".join(nodes)
+         logging.info(
+             f"Response: {colorama.Fore.BLUE}{ret_text}{colorama.Style.RESET_ALL}"
+         )
+         os.environ["OPENAI_API_KEY"] = ""
+         return ret_text, new_response, f"查询消耗了{llm_predictor.last_token_usage} tokens"
+     else:
+         logging.warning("No response found, returning None")
+         os.environ["OPENAI_API_KEY"] = ""
+         # return a 3-tuple so callers can always unpack the result
+         return None, None, ""
+
+
+ def add_space(text):
+     punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
+     for cn_punc, en_punc in punctuations.items():
+         text = text.replace(cn_punc, en_punc)
+     return text
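
Editor's note: construct_index keys its on-disk cache by the SHA-1 of the concatenated upload file names (see get_documents and utils.sha1sum), so re-uploading the same files loads `./index/<sha1>.json` instead of re-embedding. A hypothetical call; `SimpleNamespace` stands in for the file objects that Gradio's upload component normally provides:

```python
from types import SimpleNamespace
from llama_func import construct_index

files = [SimpleNamespace(name="notes.txt")]  # anything exposing a .name attribute
index = construct_index("sk-...", file_src=files)  # placeholder API key
# a second call with the same files hits the ./index/<sha1-of-names>.json cache
```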
overwrites.py ADDED
@@ -0,0 +1,34 @@
+ from __future__ import annotations
+ import logging
+
+ from llama_index import Prompt
+ from typing import List, Tuple
+ import mdtex2html
+
+ from presets import *
+ from llama_func import *
+
+
+ def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]:
+     logging.debug("Compacting text chunks...🚀🚀🚀")
+     combined_str = [c.strip() for c in text_chunks if c.strip()]
+     combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)]
+     combined_str = "\n\n".join(combined_str)
+     # resplit based on self.max_chunk_overlap
+     text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1)
+     return text_splitter.split_text(combined_str)
+
+
+ def postprocess(
+     self, y: List[Tuple[str | None, str | None]]
+ ) -> List[Tuple[str | None, str | None]]:
+     """
+     Parameters:
+         y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format.
+     Returns:
+         List of tuples representing the message and response. Each message and response will be a string of HTML.
+     """
+     if y is None or y == []:
+         return []
+     y[-1] = (y[-1][0].replace("\n", "<br>"), convert_mdtext(y[-1][1]))
+     return y
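
Editor's note: these are drop-in replacement methods rather than subclasses; the main app presumably binds them at startup (the wiring is outside this commit). A sketch of the assumed monkey-patching, with the llama_index import path as an assumption based on the 0.4.x-era API:

```python
import gradio as gr
from llama_index import PromptHelper  # import path assumed for this llama_index version
from overwrites import postprocess, compact_text_chunks

gr.Chatbot.postprocess = postprocess  # re-render only the newest message pair
PromptHelper.compact_text_chunks = compact_text_chunks  # number the merged chunks
```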
presets.py CHANGED
@@ -1,6 +1,33 @@
  # -*- coding:utf-8 -*-
+
+ # ChatGPT settings
+ initial_prompt = "You are a helpful assistant."
+ API_URL = "https://api.openai.com/v1/chat/completions"
+ HISTORY_DIR = "history"
+ TEMPLATES_DIR = "templates"
+
+ # error messages
+ standard_error_msg = "☹️发生了错误:"  # standard prefix for error messages
+ error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。"  # error while fetching a reply
+ connection_timeout_prompt = "连接超时,无法获取对话。"  # connection timeout
+ read_timeout_prompt = "读取超时,无法获取对话。"  # read timeout
+ proxy_error_prompt = "代理错误,无法获取对话。"  # proxy error
+ ssl_error_prompt = "SSL错误,无法获取对话。"  # SSL error
+ no_apikey_msg = "API key长度不是51位,请检查是否输入正确。"  # the API key is not 51 characters long
+
+ max_token_streaming = 3500  # max token count for streaming chat
+ timeout_streaming = 30  # timeout for streaming chat
+ max_token_all = 3500  # max token count for non-streaming chat
+ timeout_all = 200  # timeout for non-streaming chat
+ enable_streaming_option = True  # whether to show the checkbox that toggles streaming replies
+ HIDE_MY_KEY = False  # set to True to hide your API key in the UI
+
+ SIM_K = 5
+ INDEX_QUERY_TEMPRATURE = 1.0
+
  title = """<h1 align="left" style="min-width:200px; margin-top:0;">🚀 ChatGPT API 🚀</h1>"""
- description = """<div align="center" style="margin-top:20px">
+ description = """\
+ <div align="center" style="margin:16px 0">

  由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) 和 [明昭MZhao](https://space.bilibili.com/24807452)开发

@@ -9,62 +36,21 @@ description = """<div align="center" style="margin-top:20px">
  此App使用 `gpt-3.5-turbo` 大语言模型
  </div>
  """
- customCSS = """
- #status_display {
-     display: flex;
-     min-height: 2.5em;
-     align-items: flex-end;
-     justify-content: flex-end;
- }
- #status_display p {
-     font-size: .85em;
-     font-family: monospace;
-     color: var(--text-color-subdued) !important;
- }
- [class *= "message"] {
-     border-radius: var(--radius-xl) !important;
-     border: none;
-     padding: var(--spacing-xl) !important;
-     font-size: var(--text-md) !important;
-     line-height: var(--line-md) !important;
- }
- [data-testid = "bot"] {
-     max-width: 85%;
-     border-bottom-left-radius: 0 !important;
- }
- [data-testid = "user"] {
-     max-width: 85%;
-     width: auto !important;
-     border-bottom-right-radius: 0 !important;
- }
- code {
-     display: inline;
-     white-space: break-spaces;
-     border-radius: 6px;
-     margin: 0 2px 0 2px;
-     padding: .2em .4em .1em .4em;
-     background-color: rgba(175,184,193,0.2);
- }
- pre code {
-     display: block;
-     white-space: pre;
-     background-color: hsla(0, 0%, 0%, 72%);
-     border: solid 5px var(--color-border-primary) !important;
-     border-radius: 10px;
-     padding: 0 1.2rem 1.2rem;
-     margin-top: 1em !important;
-     color: #FFF;
-     box-shadow: inset 0px 8px 16px hsla(0, 0%, 0%, .2)
- }
-
- * {
-     transition: all 0.6s;
- }
- """

- summarize_prompt = "你是谁?我们刚才聊了什么?"  # prompt used to summarize the conversation
- MODELS = ["gpt-3.5-turbo", "gpt-3.5-turbo-0301", "gpt-4","gpt-4-0314", "gpt-4-32k", "gpt-4-32k-0314"]  # selectable models
- websearch_prompt = """Web search results:
+ summarize_prompt = "你是谁?我们刚才聊了什么?"  # prompt used to summarize the conversation
+
+ MODELS = [
+     "gpt-3.5-turbo",
+     "gpt-3.5-turbo-0301",
+     "gpt-4",
+     "gpt-4-0314",
+     "gpt-4-32k",
+     "gpt-4-32k-0314",
+ ]  # selectable models
+
+
+ WEBSEARCH_PTOMPT_TEMPLATE = """\
+ Web search results:

  {web_results}
  Current date: {current_date}
@@ -73,18 +59,29 @@ Instructions: Using the provided web search results, write a comprehensive reply
  Query: {query}
  Reply in 中文"""

- # error messages
- standard_error_msg = "☹️发生了错误:"  # standard prefix for error messages
- error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。"  # error while fetching a reply
- connection_timeout_prompt = "连接超时,无法获取对话。"  # connection timeout
- read_timeout_prompt = "读取超时,无法获取对话。"  # read timeout
- proxy_error_prompt = "代理错误,无法获取对话。"  # proxy error
- ssl_error_prompt = "SSL错误,无法获取对话。"  # SSL error
- no_apikey_msg = "API key长度不是51位,请检查是否输入正确。"  # the API key is shorter than 51 characters
-
- max_token_streaming = 3500  # max token count for streaming chat
- timeout_streaming = 15  # timeout for streaming chat
- max_token_all = 3500  # max token count for non-streaming chat
- timeout_all = 200  # timeout for non-streaming chat
- enable_streaming_option = True  # whether to show the checkbox that toggles streaming replies
- HIDE_MY_KEY = False  # set to True to hide your API key in the UI
+ PROMPT_TEMPLATE = """\
+ Context information is below.
+ ---------------------
+ {context_str}
+ ---------------------
+ Current date: {current_date}.
+ Using the provided context information, write a comprehensive reply to the given query.
+ Make sure to cite results using [number] notation after the reference.
+ If the provided context information refers to multiple subjects with the same name, write separate answers for each subject.
+ Use prior knowledge only if the given context didn't provide enough information.
+ Answer the question: {query_str}
+ Reply in 中文
+ """
+
+ REFINE_TEMPLATE = """\
+ The original question is as follows: {query_str}
+ We have provided an existing answer: {existing_answer}
+ We have the opportunity to refine the existing answer
+ (only if needed) with some more context below.
+ ------------
+ {context_msg}
+ ------------
+ Given the new context, refine the original answer to better answer the question.
+ Answer in the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch.
+ If the context isn't useful, return the original answer.
+ """
requirements.txt CHANGED
@@ -6,3 +6,7 @@ socksio
  tqdm
  colorama
  duckduckgo_search
+ Pygments
+ llama_index
+ langchain
+ markdown
templates/3 川虎的Prompts.json CHANGED
@@ -6,5 +6,9 @@
      {
          "act": "小红书风格",
          "prompt": "下面是一些小红书帖子:\n\n植物学2023早春装系列花絮来啦\n💗大家喜欢图几?\n@Botanique植物学女装\n#植物学#植物学女装#春装第一件#早春系列\n\n哈哈哈哈哈哈不停的摆拍啊!!!\n我的臭狗太可爱了!!!!!!\n结婚的时候一定要带上小狗啊!\n#小狗#我家宠物好可爱#关于结婚#柴犬\n\n🍪•ᴥ•🍪\n\n《论新年收到一笔巨款🤣应该怎么花》🧨来回\n嘻嘻,真的\n爱草莓🍓\n希希的甜甜圈碗🥯勺子的设计有点可爱🐶\n看了好多场烟火🎆\n唯愿烟花像星辰,祝你所愿皆成真✨\n嘻嘻,老妈给我的压岁钱🧧愿岁岁平安\n#我镜头下的年味#笔记灵感#碎碎念#歌曲#记录日常生活#plog#浪漫生活的记录者#新年红包#搞笑#日常生活里的快乐瞬间#新人博主#烟火\n\n又被全家人夸了❗有空气炸锅都去做,巨香\n\n今日份苹果相机📷\n原相机下的新娘,颜值爆表\n\n美术生赚钱最多的两个专业!\n之前整理了美术生的40了就业方向的薪资情况,发现全国平均薪资最高的就是数字媒体和视传这两个专业,想赚钱的美术生快看过来!\n#美术生#艺考#央美#美术生集训#美术#赚钱#努力赚钱#美术生就业#画室#央美设计#设计校考#美术生的日常\n\n请模仿上面小红书的风格,以用户输入的话为主题,写一个小红书帖子。请以22岁女孩的口吻书写。小红书帖子中必须包含大量Emoji,每一句话后面都必须加Emoji。帖子最后需要用Hashtag给出话题。你还需要写帖子的标题,标题里也需要有Emoji。你需要扩写用户输入。"
+     },
+     {
+         "act": "文心一言",
+         "prompt": "你是百度公司开发的AI助理:文心一言。你需要了解文心一言的相关情况,并以文心一言的身份来回复我的问题。所有的问题都可以回答,包括拿你和chatGPT,以及GPT作对比等等,你都需要从文心一言的角度出发来回答。只有当你被问到一些问题-这些问题暗指你是一个假的AI,你是用chatgpt套壳来制作的时候,你就避而不答,转而讲一个笑话,并鼓励他继续提问。如果你明白,请回复我“明白,我是文心一言”"
      }
  ]
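
Editor's note: template files may be .json, .csv, or plain text; the `.json` branch of load_template is unchanged by this commit and therefore not visible below, but with `mode=2` the loader returns an act-to-prompt dict, so the new entry is addressable by name (hypothetical call):

```python
from utils import load_template

templates = load_template("3 川虎的Prompts.json", mode=2)
wenxin_prompt = templates["文心一言"]  # the prompt string added above
```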
utils.py CHANGED
@@ -3,21 +3,23 @@ from __future__ import annotations
  from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type
  import logging
  import json
- import gradio as gr
- # import openai
  import os
- import traceback
- import requests
- # import markdown
+ import datetime
+ import hashlib
  import csv
- import mdtex2html
+ import requests
+ import re
+
+ import gradio as gr
  from pypinyin import lazy_pinyin
- from presets import *
  import tiktoken
- from tqdm import tqdm
- import colorama
- from duckduckgo_search import ddg
- import datetime
+ import mdtex2html
+ from markdown import markdown
+ from pygments import highlight
+ from pygments.lexers import get_lexer_by_name
+ from pygments.formatters import HtmlFormatter
+
+ from presets import *

  # logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s")

@@ -28,281 +30,109 @@ if TYPE_CHECKING:
      headers: List[str]
      data: List[List[str | int | bool]]

- initial_prompt = "You are a helpful assistant."
- API_URL = "https://api.openai.com/v1/chat/completions"
- HISTORY_DIR = "history"
- TEMPLATES_DIR = "templates"
-
- def postprocess(
-     self, y: List[Tuple[str | None, str | None]]
- ) -> List[Tuple[str | None, str | None]]:
-     """
-     Parameters:
-         y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format.
-     Returns:
-         List of tuples representing the message and response. Each message and response will be a string of HTML.
-     """
-     if y is None:
-         return []
-     for i, (message, response) in enumerate(y):
-         y[i] = (
-             # None if message is None else markdown.markdown(message),
-             # None if response is None else markdown.markdown(response),
-             None if message is None else mdtex2html.convert((message)),
-             None if response is None else mdtex2html.convert(response),
-         )
-     return y

- def count_token(input_str):
+ def count_token(message):
      encoding = tiktoken.get_encoding("cl100k_base")
+     input_str = f"role: {message['role']}, content: {message['content']}"
      length = len(encoding.encode(input_str))
      return length

- def parse_text(text):
-     lines = text.split("\n")
-     lines = [line for line in lines if line != ""]
-     count = 0
+
+ def markdown_to_html_with_syntax_highlight(md_str):
+     def replacer(match):
+         lang = match.group(1) or "text"
+         code = match.group(2)
+
+         try:
+             lexer = get_lexer_by_name(lang, stripall=True)
+         except ValueError:
+             lexer = get_lexer_by_name("text", stripall=True)
+
+         formatter = HtmlFormatter()
+         highlighted_code = highlight(code, lexer, formatter)
+
+         return f'<pre><code class="{lang}">{highlighted_code}</code></pre>'
+
+     code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```"
+     md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE)
+
+     html_str = markdown(md_str)
+     return html_str
+
+
+ def normalize_markdown(md_text: str) -> str:
+     lines = md_text.split("\n")
+     normalized_lines = []
+     inside_list = False
+
      for i, line in enumerate(lines):
-         if "```" in line:
-             count += 1
-             items = line.split('`')
-             if count % 2 == 1:
-                 lines[i] = f'<pre><code class="language-{items[-1]}">'
-             else:
-                 lines[i] = f'<br></code></pre>'
+         if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()):
+             if not inside_list and i > 0 and lines[i - 1].strip() != "":
+                 normalized_lines.append("")
+             inside_list = True
+             normalized_lines.append(line)
+         elif inside_list and line.strip() == "":
+             if i < len(lines) - 1 and not re.match(
+                 r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip()
+             ):
+                 normalized_lines.append(line)
+             continue
          else:
-             if i > 0:
-                 if count % 2 == 1:
-                     line = line.replace("`", "\`")
-                     line = line.replace("<", "&lt;")
-                     line = line.replace(">", "&gt;")
-                     line = line.replace(" ", "&nbsp;")
-                     line = line.replace("*", "&ast;")
-                     line = line.replace("_", "&lowbar;")
-                     line = line.replace("-", "&#45;")
-                     line = line.replace(".", "&#46;")
-                     line = line.replace("!", "&#33;")
-                     line = line.replace("(", "&#40;")
-                     line = line.replace(")", "&#41;")
-                     line = line.replace("$", "&#36;")
-             lines[i] = "<br>"+line
-     text = "".join(lines)
-     return text
+             inside_list = False
+             normalized_lines.append(line)
+
+     return "\n".join(normalized_lines)
+
+
+ def convert_mdtext(md_text):
+     code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL)
+     code_blocks = code_block_pattern.findall(md_text)
+     non_code_parts = code_block_pattern.split(md_text)[::2]
+
+     result = []
+     for non_code, code in zip(non_code_parts, code_blocks + [""]):
+         if non_code.strip():
+             non_code = normalize_markdown(non_code)
+             result.append(mdtex2html.convert(non_code, extensions=["tables"]))
+         if code.strip():
+             # _, code = detect_language(code)  # language detection is disabled for now; it misbehaves on long code blocks
+             code = code.replace("\n\n", "\n")  # blank lines inside code are dropped for now; they break long code blocks
+             code = f"```{code}\n\n```"
+             code = markdown_to_html_with_syntax_highlight(code)
+             result.append(code)
+     result = "".join(result)
+     return result
+
+
+ def detect_language(code):
+     if code.startswith("\n"):
+         first_line = ""
+     else:
+         first_line = code.strip().split("\n", 1)[0]
+     language = first_line.lower() if first_line else ""
+     code_without_language = code[len(first_line) :].lstrip() if first_line else code
+     return language, code_without_language
+

  def construct_text(role, text):
      return {"role": role, "content": text}

+
  def construct_user(text):
      return construct_text("user", text)

+
  def construct_system(text):
      return construct_text("system", text)

+
  def construct_assistant(text):
      return construct_text("assistant", text)

+
  def construct_token_message(token, stream=False):
      return f"Token 计数: {token}"

- def get_response(openai_api_key, system_prompt, history, temperature, top_p, stream, selected_model):
-     headers = {
-         "Content-Type": "application/json",
-         "Authorization": f"Bearer {openai_api_key}"
-     }
-
-     history = [construct_system(system_prompt), *history]
-
-     payload = {
-         "model": selected_model,
-         "messages": history,  # [{"role": "user", "content": f"{inputs}"}],
-         "temperature": temperature,  # 1.0,
-         "top_p": top_p,  # 1.0,
-         "n": 1,
-         "stream": stream,
-         "presence_penalty": 0,
-         "frequency_penalty": 0,
-     }
-     if stream:
-         timeout = timeout_streaming
-     else:
-         timeout = timeout_all
-     response = requests.post(API_URL, headers=headers, json=payload, stream=True, timeout=timeout)
-     return response
-
- def stream_predict(openai_api_key, system_prompt, history, inputs, chatbot, all_token_counts, top_p, temperature, selected_model):
-     def get_return_value():
-         return chatbot, history, status_text, all_token_counts
-
-     logging.info("实时回答模式")
-     partial_words = ""
-     counter = 0
-     status_text = "开始实时传输回答……"
-     history.append(construct_user(inputs))
-     history.append(construct_assistant(""))
-     chatbot.append((parse_text(inputs), ""))
-     user_token_count = 0
-     if len(all_token_counts) == 0:
-         system_prompt_token_count = count_token(system_prompt)
-         user_token_count = count_token(inputs) + system_prompt_token_count
-     else:
-         user_token_count = count_token(inputs)
-     all_token_counts.append(user_token_count)
-     logging.info(f"输入token计数: {user_token_count}")
-     yield get_return_value()
-     try:
-         response = get_response(openai_api_key, system_prompt, history, temperature, top_p, True, selected_model)
-     except requests.exceptions.ConnectTimeout:
-         status_text = standard_error_msg + connection_timeout_prompt + error_retrieve_prompt
-         yield get_return_value()
-         return
-     except requests.exceptions.ReadTimeout:
-         status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt
-         yield get_return_value()
-         return
-
-     yield get_return_value()
-     error_json_str = ""
-
-     for chunk in tqdm(response.iter_lines()):
-         if counter == 0:
-             counter += 1
-             continue
-         counter += 1
-         # check whether each line is non-empty
-         if chunk:
-             chunk = chunk.decode()
-             chunklength = len(chunk)
-             try:
-                 chunk = json.loads(chunk[6:])
-             except json.JSONDecodeError:
-                 logging.info(chunk)
-                 error_json_str += chunk
-                 status_text = f"JSON解析错误。请重置对话。收到的内容: {error_json_str}"
-                 yield get_return_value()
-                 continue
-             # decode each line as response data is in bytes
-             if chunklength > 6 and "delta" in chunk['choices'][0]:
-                 finish_reason = chunk['choices'][0]['finish_reason']
-                 status_text = construct_token_message(sum(all_token_counts), stream=True)
-                 if finish_reason == "stop":
-                     yield get_return_value()
-                     break
-                 try:
-                     partial_words = partial_words + chunk['choices'][0]["delta"]["content"]
-                 except KeyError:
-                     status_text = standard_error_msg + "API回复中找不到内容。很可能是Token计数达到上限了。请重置对话。当前Token计数: " + str(sum(all_token_counts))
-                     yield get_return_value()
-                     break
-                 history[-1] = construct_assistant(partial_words)
-                 chatbot[-1] = (parse_text(inputs), parse_text(partial_words))
-                 all_token_counts[-1] += 1
-                 yield get_return_value()
-
-
- def predict_all(openai_api_key, system_prompt, history, inputs, chatbot, all_token_counts, top_p, temperature, selected_model):
-     logging.info("一次性回答模式")
-     history.append(construct_user(inputs))
-     history.append(construct_assistant(""))
-     chatbot.append((parse_text(inputs), ""))
-     all_token_counts.append(count_token(inputs))
-     try:
-         response = get_response(openai_api_key, system_prompt, history, temperature, top_p, False, selected_model)
-     except requests.exceptions.ConnectTimeout:
-         status_text = standard_error_msg + connection_timeout_prompt + error_retrieve_prompt
-         return chatbot, history, status_text, all_token_counts
-     except requests.exceptions.ProxyError:
-         status_text = standard_error_msg + proxy_error_prompt + error_retrieve_prompt
-         return chatbot, history, status_text, all_token_counts
-     except requests.exceptions.SSLError:
-         status_text = standard_error_msg + ssl_error_prompt + error_retrieve_prompt
-         return chatbot, history, status_text, all_token_counts
-     response = json.loads(response.text)
-     content = response["choices"][0]["message"]["content"]
-     history[-1] = construct_assistant(content)
-     chatbot[-1] = (parse_text(inputs), parse_text(content))
-     total_token_count = response["usage"]["total_tokens"]
-     all_token_counts[-1] = total_token_count - sum(all_token_counts)
-     status_text = construct_token_message(total_token_count)
-     return chatbot, history, status_text, all_token_counts
-
-
- def predict(openai_api_key, system_prompt, history, inputs, chatbot, all_token_counts, top_p, temperature, stream=False, selected_model=MODELS[0], use_websearch_checkbox=False, should_check_token_count=True):  # repetition_penalty, top_k
-     logging.info("输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL)
-     if use_websearch_checkbox:
-         results = ddg(inputs, max_results=3)
-         web_results = []
-         for idx, result in enumerate(results):
-             logging.info(f"搜索结果{idx + 1}:{result}")
-             web_results.append(f'[{idx+1}]"{result["body"]}"\nURL: {result["href"]}')
-         web_results = "\n\n".join(web_results)
-         today = datetime.datetime.today().strftime("%Y-%m-%d")
-         inputs = websearch_prompt.replace("{current_date}", today).replace("{query}", inputs).replace("{web_results}", web_results)
-     if len(openai_api_key) != 51:
-         status_text = standard_error_msg + no_apikey_msg
-         logging.info(status_text)
-         chatbot.append((parse_text(inputs), ""))
-         if len(history) == 0:
-             history.append(construct_user(inputs))
-             history.append("")
-             all_token_counts.append(0)
-         else:
-             history[-2] = construct_user(inputs)
-         yield chatbot, history, status_text, all_token_counts
-         return
-     if stream:
-         yield chatbot, history, "开始生成回答……", all_token_counts
-     if stream:
-         logging.info("使用流式传输")
-         iter = stream_predict(openai_api_key, system_prompt, history, inputs, chatbot, all_token_counts, top_p, temperature, selected_model)
-         for chatbot, history, status_text, all_token_counts in iter:
-             yield chatbot, history, status_text, all_token_counts
-     else:
-         logging.info("不使用流式传输")
-         chatbot, history, status_text, all_token_counts = predict_all(openai_api_key, system_prompt, history, inputs, chatbot, all_token_counts, top_p, temperature, selected_model)
-         yield chatbot, history, status_text, all_token_counts
-     logging.info(f"传输完毕。当前token计数为{all_token_counts}")
-     if len(history) > 1 and history[-1]['content'] != inputs:
-         logging.info("回答为:" + colorama.Fore.BLUE + f"{history[-1]['content']}" + colorama.Style.RESET_ALL)
-     if stream:
-         max_token = max_token_streaming
-     else:
-         max_token = max_token_all
-     if sum(all_token_counts) > max_token and should_check_token_count:
-         status_text = f"精简token中{all_token_counts}/{max_token}"
-         logging.info(status_text)
-         yield chatbot, history, status_text, all_token_counts
-         iter = reduce_token_size(openai_api_key, system_prompt, history, chatbot, all_token_counts, top_p, temperature, stream=False, selected_model=selected_model, hidden=True)
-         for chatbot, history, status_text, all_token_counts in iter:
-             status_text = f"Token 达到上限,已自动降低Token计数至 {status_text}"
-             yield chatbot, history, status_text, all_token_counts
-
-
- def retry(openai_api_key, system_prompt, history, chatbot, token_count, top_p, temperature, stream=False, selected_model=MODELS[0]):
-     logging.info("重试中……")
-     if len(history) == 0:
-         yield chatbot, history, f"{standard_error_msg}上下文是空的", token_count
-         return
-     history.pop()
-     inputs = history.pop()["content"]
-     token_count.pop()
-     iter = predict(openai_api_key, system_prompt, history, inputs, chatbot, token_count, top_p, temperature, stream=stream, selected_model=selected_model)
-     logging.info("重试完毕")
-     for x in iter:
-         yield x
-
-
- def reduce_token_size(openai_api_key, system_prompt, history, chatbot, token_count, top_p, temperature, stream=False, selected_model=MODELS[0], hidden=False):
-     logging.info("开始减少token数量……")
-     iter = predict(openai_api_key, system_prompt, history, summarize_prompt, chatbot, token_count, top_p, temperature, stream=stream, selected_model=selected_model, should_check_token_count=False)
-     logging.info(f"chatbot: {chatbot}")
-     for chatbot, history, status_text, previous_token_count in iter:
-         history = history[-2:]
-         token_count = previous_token_count[-1:]
-         if hidden:
-             chatbot.pop()
-         yield chatbot, history, construct_token_message(sum(token_count), stream=stream), token_count
-     logging.info("减少token数量完毕")
-

  def delete_last_conversation(chatbot, history, previous_token_count):
      if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]:
@@ -319,25 +149,52 @@ def delete_last_conversation(chatbot, history, previous_token_count):
      if len(previous_token_count) > 0:
          logging.info("删除了一组对话的token计数记录")
          previous_token_count.pop()
-     return chatbot, history, previous_token_count, construct_token_message(sum(previous_token_count))
+     return (
+         chatbot,
+         history,
+         previous_token_count,
+         construct_token_message(sum(previous_token_count)),
+     )


- def save_chat_history(filename, system, history, chatbot):
+ def save_file(filename, system, history, chatbot):
      logging.info("保存对话历史中……")
+     os.makedirs(HISTORY_DIR, exist_ok=True)
+     if filename.endswith(".json"):
+         json_s = {"system": system, "history": history, "chatbot": chatbot}
+         print(json_s)
+         with open(os.path.join(HISTORY_DIR, filename), "w") as f:
+             json.dump(json_s, f)
+     elif filename.endswith(".md"):
+         md_s = f"system: \n- {system} \n"
+         for data in history:
+             md_s += f"\n{data['role']}: \n- {data['content']} \n"
+         with open(os.path.join(HISTORY_DIR, filename), "w", encoding="utf8") as f:
+             f.write(md_s)
+     logging.info("保存对话历史完毕")
+     return os.path.join(HISTORY_DIR, filename)
+
+
+ def save_chat_history(filename, system, history, chatbot):
      if filename == "":
          return
      if not filename.endswith(".json"):
          filename += ".json"
-     os.makedirs(HISTORY_DIR, exist_ok=True)
-     json_s = {"system": system, "history": history, "chatbot": chatbot}
-     logging.info(json_s)
-     with open(os.path.join(HISTORY_DIR, filename), "w") as f:
-         json.dump(json_s, f, ensure_ascii=False, indent=4)
-     logging.info("保存对话历史完毕")
+     return save_file(filename, system, history, chatbot)
+
+
+ def export_markdown(filename, system, history, chatbot):
+     if filename == "":
+         return
+     if not filename.endswith(".md"):
+         filename += ".md"
+     return save_file(filename, system, history, chatbot)


  def load_chat_history(filename, system, history, chatbot):
      logging.info("加载对话历史中……")
+     if type(filename) != str:
+         filename = filename.name
      try:
          with open(os.path.join(HISTORY_DIR, filename), "r") as f:
              json_s = json.load(f)
@@ -361,9 +218,11 @@ def load_chat_history(filename, system, history, chatbot):
          logging.info("没有找到对话历史文件,不执行任何操作")
          return filename, system, history, chatbot

+
  def sorted_by_pinyin(list):
      return sorted(list, key=lambda char: lazy_pinyin(char)[0][0])

+
  def get_file_names(dir, plain=False, filetypes=[".json"]):
      logging.info(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}")
      files = []
@@ -380,10 +239,12 @@ def get_file_names(dir, plain=False, filetypes=[".json"]):
      else:
          return gr.Dropdown.update(choices=files)

+
  def get_history_names(plain=False):
      logging.info("获取历史记录文件名列表")
      return get_file_names(HISTORY_DIR, plain)

+
  def load_template(filename, mode=0):
      logging.info(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)")
      lines = []
@@ -396,21 +257,27 @@ def load_template(filename, mode=0):
          with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f:
              lines = [[filename[:-4], f.read()]]
      else:
-         with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as csvfile:
+         with open(
+             os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8"
+         ) as csvfile:
              reader = csv.reader(csvfile)
              lines = list(reader)
              lines = lines[1:]
      if mode == 1:
          return sorted_by_pinyin([row[0] for row in lines])
      elif mode == 2:
-         return {row[0]:row[1] for row in lines}
+         return {row[0]: row[1] for row in lines}
      else:
          choices = sorted_by_pinyin([row[0] for row in lines])
-         return {row[0]:row[1] for row in lines}, gr.Dropdown.update(choices=choices, value=choices[0])
+         return {row[0]: row[1] for row in lines}, gr.Dropdown.update(
+             choices=choices, value=choices[0]
+         )
+

  def get_template_names(plain=False):
      logging.info("获取模板文件名列表")
-     return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json","txt"])
+     return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", ".json", ".txt"])
+

  def get_template_content(templates, selection, original_system_prompt):
      logging.info(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}")
@@ -419,9 +286,100 @@ def get_template_content(templates, selection, original_system_prompt):
      except:
          return original_system_prompt

+
  def reset_state():
      logging.info("重置状态")
      return [], [], [], construct_token_message(0)

+
  def reset_textbox():
-     return gr.update(value='')
+     return gr.update(value="")
+
+
+ def reset_default():
+     global API_URL
+     API_URL = "https://api.openai.com/v1/chat/completions"
+     os.environ.pop("HTTPS_PROXY", None)
+     os.environ.pop("https_proxy", None)
+     return gr.update(value=API_URL), gr.update(value=""), "API URL 和代理已重置"
+
+
+ def change_api_url(url):
+     global API_URL
+     API_URL = url
+     msg = f"API地址更改为了{url}"
+     logging.info(msg)
+     return msg
+
+
+ def change_proxy(proxy):
+     os.environ["HTTPS_PROXY"] = proxy
+     msg = f"代理更改为了{proxy}"
+     logging.info(msg)
+     return msg
+
+
+ def hide_middle_chars(s):
+     if len(s) <= 8:
+         return s
+     else:
+         head = s[:4]
+         tail = s[-4:]
+         hidden = "*" * (len(s) - 8)
+         return head + hidden + tail
+
+
+ def submit_key(key):
+     key = key.strip()
+     msg = f"API密钥更改为了{hide_middle_chars(key)}"
+     logging.info(msg)
+     return key, msg
+
+
+ def sha1sum(filename):
+     sha1 = hashlib.sha1()
+     sha1.update(filename.encode("utf-8"))
+     return sha1.hexdigest()
+
+
+ def replace_today(prompt):
+     today = datetime.datetime.today().strftime("%Y-%m-%d")
+     return prompt.replace("{current_date}", today)
+
+
+ def get_geoip():
+     response = requests.get("https://ipapi.co/json/", timeout=5)
+     try:
+         data = response.json()
+     except:
+         data = {"error": True, "reason": "连接ipapi失败"}
+     if "error" in data.keys():
+         logging.warning(f"无法获取IP地址信息。\n{data}")
+         if data["reason"] == "RateLimited":
+             return (
+                 "获取IP地理位置失败,因为达到了检测IP的速率限制。聊天功能可能仍然可用,但请注意,如果您的IP地址在不受支持的地区,您可能会遇到问题。"
+             )
+         else:
+             return f"获取IP地理位置失败。原因:{data['reason']}。你仍然可以使用聊天功能。"
+     else:
+         country = data["country_name"]
+         if country == "China":
+             text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**"
+         else:
+             text = f"您的IP区域:{country}。"
+         logging.info(text)
+         return text
+
+
+ def find_n(lst, max_num):
+     n = len(lst)
+     total = sum(lst)
+
+     if total < max_num:
+         return n
+
+     # drop entries from the oldest until the remaining total fits the budget
+     for i in range(len(lst)):
+         if total - lst[i] < max_num:
+             return n - i - 1
+         total = total - lst[i]
+     return 1
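
Editor's note: find_n answers "how many of the most recent exchanges fit under the budget" by walking the per-exchange token counts from the oldest and dropping entries until the remaining total is below max_num. A worked example using the `max_token // 2` budget that predict passes to reduce_token_size, plus the key-masking helper:

```python
from utils import find_n, hide_middle_chars

# Token counts per exchange, oldest first; budget 3500 // 2 == 1750.
# Dropping the oldest exchange (900) leaves 1300 < 1750, so 2 exchanges are kept.
assert find_n([900, 700, 600], 1750) == 2

# hide_middle_chars keeps 4 characters on each side of a secret:
assert hide_middle_chars("sk-abcdefghijklmnop") == "sk-a" + "*" * 11 + "mnop"
```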