qingxu99 committed on
Commit 3da12b5
1 Parent(s): 12710ff

readme translation

crazy_functions/crazy_functions_test.py CHANGED
@@ -111,7 +111,7 @@ def test_Markdown多语言():
     from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
     txt = "README.md"
     history = []
-    for lang in ["English", "Spanish", "French", "German", "Italian", "Chinese", "Japanese", "Korean", "Portuguese", "Russian", "Arabic"]:
+    for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
         plugin_kwargs = {"advanced_arg": lang}
         for cookies, cb, hist, msg in Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
             print(cb)
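For readers unfamiliar with the plugin protocol this test exercises: each plugin is a generator that yields `(cookies, chatbot, history, msg)` tuples, and the caller drains the generator to observe intermediate state. A minimal self-contained sketch of that pattern (all names here are illustrative stand-ins, not the project's real implementation):

```python
# Toy stand-in for a generator-style plugin such as Markdown翻译指定语言:
# it yields (cookies, chatbot, history, msg) after each step so the caller
# can render intermediate progress. Names are illustrative only.
def translate_markdown_stub(txt, plugin_kwargs):
    lang = plugin_kwargs["advanced_arg"]
    chatbot, history = [], []
    chatbot.append((f"Translate {txt}", f"Translating into {lang} ..."))
    yield {}, chatbot, history, "processing"
    chatbot.append((None, f"{txt} translated into {lang}."))
    yield {}, chatbot, history, "done"

# Drive it the same way the test above drives the real plugin.
messages = []
for lang in ["English", "French"]:
    plugin_kwargs = {"advanced_arg": lang}
    for cookies, cb, hist, msg in translate_markdown_stub("README.md", plugin_kwargs):
        messages.append(msg)

print(messages)  # ['processing', 'done', 'processing', 'done']
```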
crazy_functions/批量Markdown翻译.py CHANGED
@@ -39,11 +39,11 @@ class PaperFileGroup():
         for r, k in zip(self.sp_file_result, self.sp_file_index):
             self.file_result[k] += r

-    def write_result(self):
+    def write_result(self, language):
         manifest = []
         for path, res in zip(self.file_paths, self.file_result):
-            with open(path + f'.{gen_time_str()}.trans.md', 'w', encoding='utf8') as f:
-                manifest.append(path + '.trans.md')
+            with open(path + f'.{gen_time_str()}.{language}.md', 'w', encoding='utf8') as f:
+                manifest.append(path + f'.{gen_time_str()}.{language}.md')
                 f.write(res)
         return manifest

@@ -97,7 +97,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
     for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]):
         pfg.sp_file_result.append(gpt_say)
     pfg.merge_result()
-    pfg.write_result()
+    pfg.write_result(language)
     except:
         print(trimmed_format_exc())
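The effect of the change is easiest to see in isolation: output files are now named per target language instead of the fixed `.trans.md` suffix. A standalone sketch of the naming scheme (the real `gen_time_str` lives in the project; the stand-in below is an assumption; note that passing the timestamp in once keeps the written file and the manifest entry consistent, since calling a time helper twice could straddle a second boundary):

```python
import time

def gen_time_str():
    # Stand-in for the project's helper; assumed to return a timestamp string.
    return time.strftime("%Y-%m-%d-%H-%M-%S")

def output_name(path, language, time_str=None):
    # New scheme from the diff: <path>.<time>.<language>.md
    # instead of the old <path>.<time>.trans.md
    if time_str is None:
        time_str = gen_time_str()
    return f"{path}.{time_str}.{language}.md"

print(output_name("README.md", "German", time_str="2023-05-21-10-00-00"))
# README.md.2023-05-21-10-00-00.German.md
```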
docs/README.md.German.md ADDED
@@ -0,0 +1,307 @@
+ > **Note**
+ >
+ > When installing dependencies, strictly select the **versions specified** in **requirements.txt**.
+ >
+ > `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
+
+ # <img src="docs/logo.png" width="40" > GPT Academic Optimization (GPT Academic)
+
+ **If you like this project, please give it a star; if you have developed better hotkeys or function plugins, feel free to open a pull request.**
+
+ If you like this project, please give it a star. If you have developed more useful academic shortcuts or function plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
+ To translate this project into any language with GPT, read and run `multi_language.py` (experimental).
+
+ > **Note**
+ >
+ > 1. Please note that only function plugins (buttons) marked in **red** can read files, and some plugins are located in the **dropdown menu** of the plugin area. In addition, we welcome and handle PRs for any new function plugin with **highest priority**.
+ >
+ > 2. The functionality of each file in this project is described in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can call the related function plugins at any time to have GPT regenerate the project's self-analysis report. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation instructions](#Installation).
+ >
+ > 3. This project is compatible with and encourages the use of domestic language models such as ChatGLM, RWKV, Pangu, etc. Multiple api-keys can coexist and can be specified in the configuration file as `API_KEY="openai-key1,openai-key2,api2d-key3"`. To change the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to apply it.
+
+ Function | Description
+ --- | ---
+ One-click polishing | Supports one-click polishing and one-click grammar checking of academic papers
+ One-click Chinese-English translation | One-click Chinese-English translation
+ One-click code explanation | Displays code, explains code, generates code, and adds comments to code
+ [Custom hotkeys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom hotkeys
+ Modular design | Supports powerful custom [function plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions). Plugins support [hot updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+ [Self-program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of this project's source code
+ [Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
+ Paper reading, paper [translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] One-click explanation of an entire LaTeX/PDF paper and generation of an abstract
+ Full LaTeX translation and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plugin] One-click translation or polishing of a LaTeX paper
+ Batch comment generation | [Function plugin] One-click batch generation of function comments
+ Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the five languages above?
+ Chat analysis report generation | [Function plugin] Automatically generates a summary report after execution
+ [Full PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title and abstract of a PDF paper and translates the full text (multithreaded)
+ [Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article URL to translate the abstract and download the PDF with one click
+ [Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let GPT help you write your [related works](https://www.bilibili.com/video/BV1GP411U7Az/)
+ Internet information aggregation + GPT | [Function plugin] Let GPT [gather information from the Internet](https://www.bilibili.com/video/BV1om4y127ck/) before answering a question, so its information never goes stale
+ Display of formulas/images/tables | Shows formulas in both [TeX and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), supports formula and code highlighting
+ Multithreaded plugin support | Supports multithreaded calls to ChatGPT for [batch processing](https://www.bilibili.com/video/BV1FT411H7c5/) of text or programs
+ Dark Gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Append ```/?__theme=dark``` to the end of the browser URL to switch to the dark theme
+ [Support for multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) interface support | Being served simultaneously by GPT-3.5, GPT-4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) must feel great, right?
+ Access to more LLM models, support for [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Added the Newbing interface (new Bing), introduced support for Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Pangu-α](https://openi.org.cn/pangu/)
+ More new features (such as image generation) ... | See the end of this document ...
+
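Note 3's comma-separated `API_KEY` format implies a small amount of parsing on the project's side. A hedged sketch of the idea (the helper names below are hypothetical, not the project's actual code):

```python
import random

def split_api_keys(api_key_config):
    # Split the comma-separated API_KEY string into individual keys.
    return [k.strip() for k in api_key_config.split(",") if k.strip()]

def pick_api_key(api_key_config):
    # Pick one key at random per request -- a sketch of load balancing
    # across multiple coexisting api-keys.
    return random.choice(split_api_keys(api_key_config))

keys = split_api_keys("openai-key1,openai-key2,api2d-key3")
print(keys)  # ['openai-key1', 'openai-key2', 'api2d-key3']
```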
+ - New interface (edit the LAYOUT option in `config.py` to switch between "side-by-side layout" and "top-bottom layout")
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
+ </div>
+
+ - All buttons are dynamically generated by reading `functional.py`, and custom functions can be easily added, freeing up the clipboard.
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
+ </div>
+
+ - Proofreading/correcting
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
+ </div>
+
+ - If the output contains formulas, they are displayed in both TeX format and rendered format for easy copying and reading.
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
+ </div>
+
+ - Don't feel like reading the project code? Show the entire project to ChatGPT.
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
+ </div>
+
+ - Multiple large language models mixed and called together (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4).
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
+ </div>
+
+ ---
+ # Installation
+ ## Installation-Method 1: Run directly (Windows, Linux or MacOS)
+
+ 1. Download the project
+ ```sh
+ git clone https://github.com/binary-husky/chatgpt_academic.git
+ cd chatgpt_academic
+ ```
+
+ 2. Configure API_KEY
+
+ Configure the API KEY and other settings in `config.py`. [Special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
+
+ (P.S. When the program runs, it first checks whether a private configuration file named `config_private.py` exists and uses its settings to override those in `config.py`. If you understand this reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and copying the settings from `config.py` into it. `config_private.py` is not tracked by git, which keeps your private information more secure. P.S. The project also supports configuring most options through environment variables; the format of the environment variables follows the `docker-compose` file. Reading priority: environment variable > `config_private.py` > `config.py`.)
+
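The reading priority stated above (environment variable > `config_private.py` > `config.py`) can be sketched as follows; `read_single_conf` is a hypothetical helper for illustration, not the project's actual loader:

```python
import os

def read_single_conf(name, config, config_private):
    # Resolve one option using the stated priority:
    # environment variable > config_private.py > config.py.
    if name in os.environ:           # highest priority
        return os.environ[name]
    if name in config_private:       # overrides config.py
        return config_private[name]
    return config[name]              # default from config.py

config = {"WEB_PORT": 8080, "API_KEY": ""}
config_private = {"API_KEY": "sk-private"}
os.environ["WEB_PORT"] = "50923"

print(read_single_conf("WEB_PORT", config, config_private))  # 50923
print(read_single_conf("API_KEY", config, config_private))   # sk-private
```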
+
+ 3. Install dependencies
+ ```sh
+ # (Option I: If familiar with Python) (Python 3.9 or above; the newer the better). Note: use the official pip source or the Aliyun pip source; temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
+ python -m pip install -r requirements.txt
+
+ # (Option II: If not familiar with Python) Use anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
+ conda create -n gptac_venv python=3.11    # create an anaconda environment
+ conda activate gptac_venv                 # activate the anaconda environment
+ python -m pip install -r requirements.txt # same step as the pip installation
+ ```
+
+ <details><summary>Click to expand if Tsinghua ChatGLM / Fudan MOSS needs to be supported as a backend</summary>
+ <p>
+
+ [Optional step] To support Tsinghua ChatGLM / Fudan MOSS as a backend, additional dependencies need to be installed (prerequisites: familiar with Python + have used PyTorch + a sufficiently powerful machine):
+ ```sh
+ # [Optional step I] Support Tsinghua ChatGLM. Note: if you encounter the "Call ChatGLM fail Cannot load ChatGLM parameters" error, refer to the following: 1. The default installation above is the torch+cpu version; to use CUDA, uninstall torch and reinstall torch+cuda. 2. If the model cannot be loaded due to insufficient machine resources, you can change the model precision in request_llm/bridge_chatglm.py: change all AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
+ python -m pip install -r request_llm/requirements_chatglm.txt
+
+ # [Optional step II] Support Fudan MOSS
+ python -m pip install -r request_llm/requirements_moss.txt
+ git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss  # when running this line, you must be in the project root path
+
+ # [Optional step III] Make sure AVAIL_LLM_MODELS in config.py contains the expected models. Currently supported models are as follows (the jittorllms series currently only supports the docker solution):
+ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
+ ```
+
+ </p>
+ </details>
+
+ 4. Run
+ ```sh
+ python main.py
+ ```
+
+ 5. Test the function plugins
+ ```
+ - Test the function plugin template function (asks GPT what happened in history on this day); you can use this function as a template to implement more complex functions.
+ Click "[Function Plugin Template Demo] Today in History"
+ ```
+
+ ## Installation-Method 2: Using Docker
+
+ 1. ChatGPT only (recommended for most people)
+
+ ``` sh
+ git clone https://github.com/binary-husky/chatgpt_academic.git  # download the project
+ cd chatgpt_academic                                             # enter the path
+ nano config.py                                                  # edit config.py with any text editor; configure "Proxy", "API_KEY", "WEB_PORT" (e.g. 50923), etc.
+ docker build -t gpt-academic .                                  # install
+
+ # (Last step, option 1) In a Linux environment, using `--net=host` is more convenient
+ docker run --rm -it --net=host gpt-academic
+ # (Last step, option 2) In a macOS/Windows environment, you can only use the -p option to expose the container's port (e.g. 50923) on the host
+ docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
+ ```
+
+ 2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
+
+ ``` sh
+ # Modify docker-compose.yml: delete solutions 1 and 3 and keep solution 2. Then adjust the configuration of solution 2 in docker-compose.yml, following the comments in the file.
+ docker-compose up
+ ```
+
+ 3. ChatGPT + LLAMA + Pangu + RWKV (requires familiarity with Docker)
+ ``` sh
+ # Modify docker-compose.yml: delete solutions 1 and 2 and keep solution 3. Then adjust the configuration of solution 3 in docker-compose.yml, following the comments in the file.
+ docker-compose up
+ ```
+
+
+ ## Installation-Method 3: Other deployment options
+
+ 1. How to use a reverse-proxy URL / the Microsoft Azure API
+ Configure API_URL_REDIRECT according to the instructions in `config.py`.
+
+ 2. Deployment on a remote cloud server (requires cloud-server knowledge and experience)
+ Please visit [Deployment wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
+
+ 3. Using WSL 2 (Windows Subsystem for Linux)
+ Please visit [Deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
+
+ 4. How to run under a sub-path URL (such as `http://localhost/subpath`)
+ Please visit the [FastAPI operating instructions](docs/WithFastapi.md)
+
+ 5. Running with docker-compose
+ Please read docker-compose.yml and follow its prompts.
+
+ ---
+ # Advanced Usage
+ ## Customize new convenience buttons / custom function plugins
+
+ 1. Customize new convenience buttons (academic hotkeys)
+ Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, the prefix and suffix can then be hot-modified and take effect without restarting the program.)
+ For example:
+ ```
+ "Super English to Chinese": {
+     # Prefix: added before your input. For example, used to describe your request, such as translating, explaining code, polishing, etc.
+     "Prefix": "Please translate the following content into Chinese, and then use a markdown table to explain the proper nouns that appear in the text one by one:\n\n",
+
+     # Suffix: added after your input. For example, combined with the prefix, it can wrap your input in quotation marks.
+     "Suffix": "",
+ },
+ ```
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
+ </div>
+
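Conceptually, such an entry just wraps the user's input. A minimal sketch of how a Prefix/Suffix pair might be applied (the function name below is illustrative, not the project's actual code):

```python
def apply_core_function(entry, user_input):
    # Wrap the user's input with the entry's configured prefix and suffix.
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")

entry = {
    "Prefix": "Please translate the following content into Chinese:\n\n",
    "Suffix": "",
}
print(apply_core_function(entry, "Hello"))
```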
+ 2. Custom function plugins
+
+ Write powerful function plugins to perform any task you can and cannot think of.
+ Writing and debugging plugins in this project is easy: as long as you have some Python knowledge, you can implement your own plugin functions by imitating the template we provide.
+ For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
+
+ ---
+ # Latest Update
+ ## New Feature Dynamics
+ 1. Dialogue saving. In the function plugin area, call "Save current dialogue" to save the current dialogue as a readable and restorable HTML file. In addition, call "Load dialogue history archive" in the function plugin area (dropdown menu) to restore a previous dialogue. Tip: clicking "Load dialogue history archive" directly without specifying a file lets you view the HTML archive cache; clicking "Delete all local dialogue history records" deletes all HTML archive caches.
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
+ </div>
+
+ 2. Report generation. Most plugins generate a work report after finishing execution.
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
+ <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
+ <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
+ </div>
+
+ 3. Modular function design: simple interfaces with powerful functionality.
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
+ <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
+ </div>
+
+ 4. This is an open-source project that can "translate itself".
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
+ </div>
+
+ 5. Translating other open-source projects is no problem.
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
+ </div>
+
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
+ </div>
+
+ 6. A small feature that decorates the UI with [`live2d`](https://github.com/fghrsh/live2d_demo) (disabled by default; requires changes to `config.py`).
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
+ </div>
+
+ 7. New MOSS language model support.
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
+ </div>
+
+ 8. OpenAI image generation.
+ <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
+ </div>
+
+ 9. OpenAI audio analysis and summarization.
+ <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
+ </div>
+
+ 10. Full-text LaTeX proofreading.
+ <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
+ </div>
+
+
+ ## Versions:
+ - Version 3.5 (Todo): Call all function plugins of this project using natural language (high priority).
+ - Version 3.4 (Todo): Improved multithreading support for locally deployed large language models (LLMs).
+ - Version 3.3: + Internet information aggregation function
+ - Version 3.2: Function plugins support more parameter interfaces (dialogue saving, interpretation of code in any language + simultaneous queries to any LLM combination)
+ - Version 3.1: Support for querying multiple GPT models simultaneously! Support for API2D, support for load balancing across multiple api-keys.
+ - Version 3.0: Support for ChatGLM and other small LLMs
+ - Version 2.6: Restructured the plugin architecture to improve interactivity; added more plugins
+ - Version 2.5: Self-updating; fixed the problem of overly long text and token overflow when summarizing the source code of large projects
+ - Version 2.4: (1) Added full-text PDF translation; (2) Added the ability to switch the position of the input area; (3) Added a vertical layout option; (4) Optimized multithreaded function plugins.
+ - Version 2.3: Improved multithreaded interactivity
+ - Version 2.2: Function plugins support hot reloading
+ - Version 2.1: Collapsible layout
+ - Version 2.0: Introduced modular function plugins
+ - Version 1.0: Basic functions
+
+ gpt_academic developer QQ group 2: 610599535
+
+ - Known issues
+ - Some browser translation plugins interfere with the front-end of this software.
+ - Either too high or too low a version of Gradio causes various exceptions.
+
+ ## Reference and Learning
+
+ ```
+ The code draws on the design of many other excellent projects, in particular:
+
+ # Project 1: Tsinghua University's ChatGLM-6B:
+ https://github.com/THUDM/ChatGLM-6B
+
+ # Project 2: Tsinghua University's JittorLLMs:
+ https://github.com/Jittor/JittorLLMs
+
+ # Project 3: Edge-GPT:
+ https://github.com/acheong08/EdgeGPT
+
+ # Project 4: ChuanhuChatGPT:
+ https://github.com/GaiZhenbiao/ChuanhuChatGPT
+
+ # Project 5: ChatPaper:
+ https://github.com/kaixindelele/ChatPaper
+
+ # More:
+ https://github.com/gradio-app/gradio
+ https://github.com/fghrsh/live2d_demo
+ ```
docs/README.md.Italian.md ADDED
@@ -0,0 +1,310 @@
1
+ > **Nota**
2
+ >
3
+ > Durante l'installazione delle dipendenze, selezionare rigorosamente le **versioni specificate** nel file requirements.txt.
4
+ >
5
+ > ` pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
6
+
7
+ # <img src="docs/logo.png" width="40" > GPT Ottimizzazione Accademica (GPT Academic)
8
+
9
+ **Se ti piace questo progetto, ti preghiamo di dargli una stella. Se hai sviluppato scorciatoie accademiche o plugin funzionali più utili, non esitare ad aprire una issue o pull request. Abbiamo anche una README in [Inglese|](docs/README_EN.md)[Giapponese|](docs/README_JP.md)[Coreano|](https://github.com/mldljyh/ko_gpt_academic)[Russo|](docs/README_RS.md)[Francese](docs/README_FR.md) tradotta da questo stesso progetto.
10
+ Per tradurre questo progetto in qualsiasi lingua con GPT, leggere e eseguire [`multi_language.py`](multi_language.py) (sperimentale).
11
+
12
+ > **Nota**
13
+ >
14
+ > 1. Si prega di notare che solo i plugin (pulsanti) contrassegnati in **rosso** supportano la lettura di file, alcuni plugin sono posizionati nel **menu a discesa** nella zona dei plugin. Accettiamo e gestiamo PR per qualsiasi nuovo plugin con **massima priorità**!
15
+ >
16
+ > 2. Le funzionalità di ogni file di questo progetto sono descritte dettagliatamente nella propria analisi di autotraduzione [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Con l'iterazione delle versioni, è possibile fare clic sui plugin funzionali correlati in qualsiasi momento per richiamare GPT e generare nuovamente il rapporto di analisi automatica del progetto. Le domande frequenti sono riassunte nella [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Metodo di installazione] (#installazione).
17
+ >
18
+ > 3. Questo progetto è compatibile e incoraggia l'utilizzo di grandi modelli di linguaggio di produzione nazionale come chatglm, RWKV, Pangu ecc. Supporta la coesistenza di più api-key e può essere compilato nel file di configurazione come `API_KEY="openai-key1,openai-key2,api2d-key3"`. Per sostituire temporaneamente `API_KEY`, inserire `API_KEY` temporaneo nell'area di input e premere Invio per renderlo effettivo.
19
+
20
+ <div align="center">Funzione | Descrizione
21
+ --- | ---
22
+ Correzione immediata | Supporta correzione immediata e ricerca degli errori di grammatica del documento con un solo clic
23
+ Traduzione cinese-inglese immediata | Traduzione cinese-inglese immediata con un solo clic
24
+ Spiegazione del codice immediata | Visualizzazione del codice, spiegazione del codice, generazione del codice, annotazione del codice con un solo clic
25
+ [Scorciatoie personalizzate](https://www.bilibili.com/video/BV14s4y1E7jN) | Supporta scorciatoie personalizzate
26
+ Design modularizzato | Supporta potenti [plugin di funzioni](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions) personalizzati, i plugin supportano l'[aggiornamento in tempo reale](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
27
+ [Auto-profiling del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] [Comprensione immediata](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) del codice sorgente di questo progetto
28
+ [Analisi del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] Un clic può analizzare l'albero di altri progetti Python/C/C++/Java/Lua/...
29
+ Lettura del documento, [traduzione](https://www.bilibili.com/video/BV1KT411x7Wn) del documento | [Plugin di funzioni] La lettura immediata dell'intero documento latex/pdf di un documento e la generazione di un riassunto
30
+ Traduzione completa di un documento Latex, [correzione immediata](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin di funzioni] Una traduzione o correzione immediata di un documento Latex
31
+ Generazione di annotazioni in batch | [Plugin di funzioni] Generazione automatica delle annotazioni di funzione con un solo clic
32
+ [Traduzione cinese-inglese di Markdown](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin di funzioni] Hai letto il [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) delle cinque lingue sopra?
33
+ Generazione di report di analisi di chat | [Plugin di funzioni] Generazione automatica di un rapporto di sintesi dopo l'esecuzione
34
+ [Funzione di traduzione di tutto il documento PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin di funzioni] Estrarre il titolo e il sommario dell'articolo PDF + tradurre l'intero testo (multithreading)
35
+ [Assistente di Arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin di funzioni] Inserire l'URL dell'articolo di Arxiv e tradurre il sommario con un clic + scaricare il PDF
36
+ [Assistente integrato di Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Plugin di funzioni] Con qualsiasi URL di pagina di ricerca di Google Scholar, lascia che GPT ti aiuti a scrivere il tuo [relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
37
+ Aggregazione delle informazioni su Internet + GPT | [Plugin di funzioni] Fai in modo che GPT rilevi le informazioni su Internet prima di rispondere alle domande, senza mai diventare obsolete
38
+ Visualizzazione di formule/img/tabelle | È possibile visualizzare un'equazione contemporaneamente in forma [tex e renderizzata](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png); supporta equazioni ed evidenziazione del codice
39
+ Supporto per plugin di funzioni multithreading | Supporto per chiamata multithreaded di chatgpt, elaborazione con un clic di grandi quantità di testo o di un programma
40
+ Avvia il tema di gradio [scuro](https://github.com/binary-husky/chatgpt_academic/issues/173) | Aggiungere ```/?__theme=dark``` dopo l'URL del browser per passare a un tema scuro
41
+ Supporto per maggiori modelli LLM, supporto API2D | Sentirsi serviti simultaneamente da GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) deve essere una grande sensazione, giusto?
42
+ Ulteriori modelli LLM supportati, supporto per l'implementazione su Huggingface | Aggiunta di un'interfaccia Newbing (Nuovo Bing), introdotta la compatibilità con Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) e [PanGu-α](https://openi.org.cn/pangu/)
43
+ Ulteriori dimostrazioni di nuove funzionalità (generazione di immagini, ecc.)... | Vedere la fine di questo documento...
44
+
45
+ - Nuova interfaccia (modificare l'opzione LAYOUT in `config.py` per passare dal layout a sinistra e a destra al layout superiore e inferiore)
46
+ <div align="center">
47
+ <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
48
+ </div>
49
+
50
+ - Tutti i pulsanti vengono generati dinamicamente leggendo il file functional.py, e aggiungerci nuove funzionalità è facile, liberando la clipboard.
51
+ <div align="center">
52
+ <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
53
+ </div>
54
+
55
+ - Revisione/Correzione
56
+ <div align="center">
57
+ <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
58
+ </div>
59
+
60
+ - Se l'output contiene una formula, viene visualizzata sia come testo che come formula renderizzata, per facilitare la copia e la visualizzazione.
61
+ <div align="center">
62
+ <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
63
+ </div>
64
+
65
+ - Non hai tempo di leggere il codice del progetto? Passa direttamente a chatgpt e chiedi informazioni.
66
+ <div align="center">
67
+ <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
68
+ </div>
69
+
70
+ - Chiamata mista di vari modelli di lingua di grandi dimensioni (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
71
+ <div align="center">
72
+ <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
73
+ </div>
74
+
75
+ ---
76
+ # Installazione
77
+ ## Installazione - Metodo 1: Esecuzione diretta (Windows, Linux o MacOS)
78
+
79
+ 1. Scarica il progetto
80
+ ```sh
81
+ git clone https://github.com/binary-husky/chatgpt_academic.git
82
+ cd chatgpt_academic
83
+ ```
84
+
85
+ 2. Configura API_KEY
86
+
87
+ In `config.py`, configura la tua API KEY e le altre impostazioni; vedi anche le [configurazioni per ambienti di rete speciali](https://github.com/binary-husky/gpt_academic/issues/1).
88
+
89
+ (N.B. All'avvio, il programma verifica prima se esiste un file di configurazione privato chiamato `config_private.py` e, se presente, usa le sue impostazioni per sovrascrivere quelle omonime in `config.py`. Pertanto, se capisci la nostra logica di lettura della configurazione, ti consigliamo vivamente di creare un nuovo file di configurazione chiamato `config_private.py` accanto a `config.py` e di spostarvi (copiarvi) le configurazioni di `config.py`. `config_private.py` non è gestito da git e protegge ulteriormente le tue informazioni personali. N.B. Il progetto supporta anche la configurazione della maggior parte delle opzioni tramite "variabili d'ambiente"; la sintassi delle variabili d'ambiente è descritta nel file `docker-compose`. Priorità di lettura: "variabili d'ambiente" > "config_private.py" > "config.py")
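La priorità di lettura descritta sopra può essere abbozzata così in Python (schizzo puramente illustrativo: il nome `leggi_config` e i valori sono inventati e non corrispondono al codice reale del progetto):

```python
import os

def leggi_config(chiave, config, config_private):
    """Schizzo della priorità: variabili d'ambiente > config_private.py > config.py"""
    # 1. le "variabili d'ambiente" hanno la priorità più alta
    if chiave in os.environ:
        return os.environ[chiave]
    # 2. poi config_private.py (non gestito da git)
    if chiave in config_private:
        return config_private[chiave]
    # 3. infine il valore predefinito di config.py
    return config[chiave]

config = {"API_KEY": "", "WEB_PORT": 50923}           # valori di config.py
config_private = {"API_KEY": "sk-chiave-di-esempio"}  # sovrascrive config.py
# stampa la chiave presa da config_private (se API_KEY non è tra le variabili d'ambiente)
print(leggi_config("API_KEY", config, config_private))
```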
90
+
91
+
92
+ 3. Installa le dipendenze
93
+ ```sh
94
+ # (Scelta I: se hai familiarità con Python) (python 3.9 o superiore; meglio se più recente). N.B.: utilizza il repository pip ufficiale o quello di Aliyun; metodo temporaneo per cambiare repository: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
95
+ python -m pip install -r requirements.txt
96
+
97
+ # (Scelta II: se non conosci Python) utilizza anaconda, il processo è simile (https://www.bilibili.com/video/BV1rc411W7Dr):
98
+ conda create -n gptac_venv python=3.11 # crea l'ambiente anaconda
99
+ conda activate gptac_venv # attiva l'ambiente anaconda
100
+ python -m pip install -r requirements.txt # questo passaggio funziona allo stesso modo dell'installazione con pip
101
+ ```
102
+
103
+ <details><summary>Se si desidera supportare ChatGLM di Tsinghua/MOSS di Fudan come backend, fare clic qui per espandere</summary>
104
+ <p>
105
+
106
+ 【Passaggio facoltativo】 Se si desidera supportare ChatGLM di Tsinghua/MOSS di Fudan come backend, è necessario installare ulteriori dipendenze (prerequisiti: conoscenza di Python, esperienza con Pytorch e computer sufficientemente potente):
107
+ ```sh
108
+ # 【Passaggio facoltativo I】 Supporto a ChatGLM di Tsinghua. Note su ChatGLM di Tsinghua: in caso di errore "Call ChatGLM fail 不能正常加载ChatGLM的参数" , fare quanto segue: 1. Per impostazione predefinita, viene installata la versione di torch + cpu; per usare CUDA, è necessario disinstallare torch e installare nuovamente torch + cuda; 2. Se non è possibile caricare il modello a causa di una configurazione insufficiente del computer, è possibile modificare la precisione del modello in request_llm/bridge_chatglm.py, cambiando AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) in AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
109
+ python -m pip install -r request_llm/requirements_chatglm.txt
110
+
111
+ # 【Passaggio facoltativo II】 Supporto a MOSS di Fudan
112
+ python -m pip install -r request_llm/requirements_moss.txt
113
+ git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Si prega di notare che quando si esegue questa riga di codice, si deve essere nella directory radice del progetto
114
+
115
+ # 【Passaggio facoltativo III】 Assicurati che il file di configurazione config.py includa tutti i modelli desiderati, al momento tutti i modelli supportati sono i seguenti (i modelli della serie jittorllms attualmente supportano solo la soluzione docker):
116
+ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
117
+ ```
118
+
119
+ </p>
120
+ </details>
121
+
122
+
123
+
124
+ 4. Esegui
125
+ ```sh
126
+ python main.py
127
+ ```
+ 
+ 5. Plugin di test delle funzioni
128
+ ```
129
+ - Funzione plugin di test (richiede una risposta gpt su cosa è successo oggi in passato), puoi utilizzare questa funzione come template per implementare funzionalità più complesse
130
+ Clicca su "[Demo del plugin di funzione] Oggi nella storia"
131
+ ```
132
+
133
+ ## Installazione - Metodo 2: Utilizzo di Docker
134
+
135
+ 1. Solo ChatGPT (consigliato per la maggior parte delle persone)
136
+
137
+ ``` sh
138
+ git clone https://github.com/binary-husky/chatgpt_academic.git # scarica il progetto
139
+ cd chatgpt_academic # entra nel percorso
140
+ nano config.py # con un qualsiasi editor di testo, modifica config.py configurando "Proxy", "API_KEY" e "WEB_PORT" (ad esempio 50923)
141
+ docker build -t gpt-academic . # installa
142
+
143
+ #(ultimo passaggio - selezione 1) In un ambiente Linux, utilizzare '--net=host' è più conveniente e veloce
144
+ docker run --rm -it --net=host gpt-academic
145
+ #(ultimo passaggio - selezione 2) In un ambiente MacOS/Windows, l'opzione -p può essere utilizzata per esporre la porta del contenitore (ad es. 50923) alla porta della macchina
146
+ docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
147
+ ```
148
+
149
+ 2. ChatGPT + ChatGLM + MOSS (richiede familiarità con Docker)
150
+
151
+ ``` sh
152
+ # Modifica docker-compose.yml, elimina i piani 1 e 3, mantieni il piano 2. Modifica la configurazione del piano 2 in docker-compose.yml, si prega di fare riferimento alle relative annotazioni
153
+ docker-compose up
154
+ ```
155
+
156
+ 3. ChatGPT + LLAMA + Pangu + RWKV (richiede familiarità con Docker)
157
+
158
+ ``` sh
159
+ # Modifica docker-compose.yml, elimina i piani 1 e 2, mantieni il piano 3. Modifica la configurazione del piano 3 in docker-compose.yml, si prega di fare riferimento alle relative annotazioni
160
+ docker-compose up
161
+ ```
162
+
163
+
164
+ ## Installazione - Metodo 3: Altre modalità di distribuzione
165
+
166
+ 1. Come utilizzare un URL di reindirizzamento / AzureAPI Cloud Microsoft
167
+ Configura API_URL_REDIRECT seguendo le istruzioni nel file `config.py`.
168
+
169
+ 2. Distribuzione su un server cloud remoto (richiede conoscenze ed esperienza di server cloud)
170
+ Si prega di visitare la [wiki di distribuzione-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
171
+
172
+ 3. Utilizzo di WSL2 (Windows Subsystem for Linux)
173
+ Si prega di visitare la [wiki di distribuzione-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
174
+
175
+ 4. Come far funzionare ChatGPT all'interno di un sottodominio (ad es. `http://localhost/subpath`)
176
+ Si prega di visitare le [Istruzioni per l'esecuzione con FastAPI](docs/WithFastapi.md)
177
+
178
+ 5. Utilizzo di docker-compose per l'esecuzione
179
+ Si prega di leggere il file docker-compose.yml e seguire le istruzioni fornite.
180
+
181
+ ---
182
+ # Uso avanzato
183
+ ## Personalizzazione dei pulsanti / Plugin di funzione personalizzati
184
+
185
+ 1. Personalizzazione dei pulsanti (scorciatoie accademiche)
186
+ Apri `core_functional.py` con qualsiasi editor di testo e aggiungi la voce seguente, quindi riavvia il programma (se il pulsante è già stato aggiunto con successo e visibile, il prefisso e il suffisso supportano la modifica in tempo reale, senza bisogno di riavviare il programma).
187
+
188
+ ad esempio
189
+ ```
190
+ "超级英译中": {
191
+ # Prefisso, verrà aggiunto prima del tuo input. Ad esempio, descrivi la tua richiesta, come tradurre, spiegare il codice, correggere errori, ecc.
192
+ "Prefix": "Per favore traduci questo testo in Cinese, e poi spiega tutti i termini tecnici nel testo con una tabella markdown:\n\n",
193
+
194
+ # Suffisso, verrà aggiunto dopo il tuo input. Ad esempio, con il prefisso puoi circondare il tuo input con le virgolette.
195
+ "Suffix": "",
196
+ },
197
+ ```
198
+ <div align="center">
199
+ <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
200
+ </div>
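In sostanza, il pulsante compone la richiesta concatenando `Prefix`, input dell'utente e `Suffix`. Uno schizzo indicativo (il nome `componi_richiesta` è inventato a scopo illustrativo, non è il codice reale del progetto):

```python
def componi_richiesta(prefix, testo_utente, suffix):
    # il prefisso precede l'input dell'utente, il suffisso lo segue
    return prefix + testo_utente + suffix

prefix = "Per favore traduci questo testo in Cinese:\n\n"
print(componi_richiesta(prefix, "Attention is all you need.", ""))
```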
201
+
202
+ 2. Plugin di funzione personalizzati
203
+
204
+ Scrivi plugin di funzione personalizzati e esegui tutte le attività che desideri o non hai mai pensato di fare.
205
+ La difficoltà di scrittura e debug dei plugin del nostro progetto è molto bassa. Se si dispone di una certa conoscenza di base di Python, è possibile realizzare la propria funzione del plugin seguendo il nostro modello. Per maggiori dettagli, consultare la [guida ai plugin di funzioni](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
206
+
207
+ ---
208
+ # Ultimo aggiornamento
209
+ ## Nuove funzionalità dinamiche
+ 
+ 1. Funzionalità di salvataggio della conversazione. Nell'area dei plugin di funzioni, fare clic su "Salva la conversazione corrente" per salvare la conversazione corrente come file html leggibile e ripristinabile; inoltre, nell'area dei plugin di funzioni (menu a discesa), fare clic su "Carica la cronologia della conversazione archiviata" per ripristinare la conversazione precedente. Suggerimento: fare clic su "Carica la cronologia della conversazione archiviata" senza specificare il file consente di visualizzare la cache degli archivi html di cronologia; fare clic su "Elimina tutti i record di cronologia delle conversazioni locali" per eliminare tutte le cache degli archivi html.
210
+ <div align="center">
211
+ <img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
212
+ </div>
213
+
214
+ 2. Generazione di rapporti. La maggior parte dei plugin genera un rapporto di lavoro dopo l'esecuzione.
215
+ <div align="center">
216
+ <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
217
+ <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
218
+ <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
219
+ </div>
220
+
221
+ 3. Progettazione modulare delle funzioni, semplici interfacce ma in grado di supportare potenti funzionalità.
222
+ <div align="center">
223
+ <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
224
+ <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
225
+ </div>
226
+
227
+ 4. Questo è un progetto open source che può "tradursi da solo".
228
+ <div align="center">
229
+ <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
230
+ </div>
231
+
232
+ 5. Tradurre altri progetti open source è semplice.
233
+ <div align="center">
234
+ <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
235
+ </div>
236
+
237
+ <div align="center">
238
+ <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
239
+ </div>
240
+
241
+ 6. Piccola funzione decorativa per [live2d](https://github.com/fghrsh/live2d_demo) (disattivata per impostazione predefinita, è necessario modificare `config.py`).
242
+ <div align="center">
243
+ <img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
244
+ </div>
245
+
246
+ 7. Supporto del grande modello linguistico MOSS
247
+ <div align="center">
248
+ <img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
249
+ </div>
250
+
251
+ 8. Generazione di immagini OpenAI
252
+ <div align="center">
253
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
254
+ </div>
255
+
256
+ 9. Analisi e sintesi audio OpenAI
257
+ <div align="center">
258
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
259
+ </div>
260
+
261
+ 10. Verifica completa dei testi in LaTeX
262
+ <div align="center">
263
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
264
+ </div>
265
+
266
+
267
+ ## Versione:
268
+ - versione 3.5(Todo): utilizzo del linguaggio naturale per chiamare tutti i plugin di funzioni del progetto (alta priorità)
269
+ - versione 3.4(Todo): supporto multi-threading per il grande modello linguistico locale Chatglm
270
+ - versione 3.3: +funzionalità di sintesi delle informazioni su Internet
271
+ - versione 3.2: i plugin di funzioni supportano più interfacce dei parametri (funzionalità di salvataggio della conversazione, lettura del codice in qualsiasi lingua + richiesta simultanea di qualsiasi combinazione di LLM)
272
+ - versione 3.1: supporto per interrogare contemporaneamente più modelli gpt! Supporto api2d, bilanciamento del carico per più apikey
273
+ - versione 3.0: supporto per Chatglm e altri piccoli LLM
274
+ - versione 2.6: ristrutturazione della struttura del plugin, miglioramento dell'interattività, aggiunta di più plugin
275
+ - versione 2.5: auto-aggiornamento, risoluzione del problema di testo troppo lungo e overflow del token durante la sintesi di grandi progetti di ingegneria
276
+ - versione 2.4: (1) funzionalità di traduzione dell'intero documento in formato PDF aggiunta; (2) funzionalità di scambio dell'area di input aggiunta; (3) opzione di layout verticale aggiunta; (4) ottimizzazione della funzione di plugin multi-threading.
277
+ - versione 2.3: miglioramento dell'interattività multi-threading
278
+ - versione 2.2: i plugin di funzioni supportano l'hot-reload
279
+ - versione 2.1: layout ripiegabile
280
+ - versione 2.0: introduzione di plugin di funzioni modulari
281
+ - versione 1.0: funzionalità di base
+ 
+ Gruppo QQ-2 degli sviluppatori di gpt_academic: 610599535
282
+
283
+ - Problemi noti
284
+ - Alcuni plugin di traduzione del browser interferiscono con l'esecuzione del frontend di questo software
285
+ - La versione di gradio troppo alta o troppo bassa può causare diversi malfunzionamenti
286
+
287
+ ## Riferimenti e apprendimento
288
+
289
+ ```
290
+ Il codice fa riferimento a molte altre eccellenti progettazioni di progetti, principalmente:
291
+
292
+ # Progetto 1: ChatGLM-6B di Tsinghua:
293
+ https://github.com/THUDM/ChatGLM-6B
294
+
295
+ # Progetto 2: JittorLLMs di Tsinghua:
296
+ https://github.com/Jittor/JittorLLMs
297
+
298
+ # Progetto 3: Edge-GPT:
299
+ https://github.com/acheong08/EdgeGPT
300
+
301
+ # Progetto 4: ChuanhuChatGPT:
302
+ https://github.com/GaiZhenbiao/ChuanhuChatGPT
303
+
304
+ # Progetto 5: ChatPaper:
305
+ https://github.com/kaixindelele/ChatPaper
306
+
307
+ # Altro:
308
+ https://github.com/gradio-app/gradio
309
+ https://github.com/fghrsh/live2d_demo
310
+ ```
docs/README.md.Korean.md ADDED
@@ -0,0 +1,268 @@
1
+ > **노트**
2
+ >
3
+ > 의존성을 설치할 때는 반드시 requirements.txt에서 **지정된 버전**을 엄격하게 선택하십시오.
4
+ >
5
+ > `pip install -r requirements.txt`
6
+
7
+ # <img src="docs/logo.png" width="40" > GPT 학술 최적화 (GPT Academic)
8
+
9
+ **이 프로젝트가 마음에 드신다면 Star를 주세요. 추가로 유용한 학술 단축키나 기능 플러그인이 있다면 이슈나 pull request를 남기세요. 이 프로젝트에 대한 [영어 |](docs/README_EN.md)[일본어 |](docs/README_JP.md)[한국어 |](https://github.com/mldljyh/ko_gpt_academic)[러시아어 |](docs/README_RS.md)[프랑스어](docs/README_FR.md)로 된 README도 있습니다.**
10
+ GPT를 이용하여 프로젝트를 임의의 언어로 번역하려면 [`multi_language.py`](multi_language.py)를 읽고 실행하십시오. (실험적)
11
+
12
+ > **노트**
13
+ >
14
+ > 1. 파일을 읽기 위해 **빨간색**으로 표시된 기능 플러그인 (버튼) 만 지원됩니다. 일부 플러그인은 플러그인 영역의 **드롭다운 메뉴**에 있습니다. 또한 새로운 플러그인은 **가장 높은 우선순위**로 환영하며 처리합니다!
15
+ >
16
+ > 2. 이 프로젝트의 각 파일의 기능을 [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)에서 자세히 설명합니다. 버전이 업데이트 됨에 따라 관련된 기능 플러그인을 클릭하고 GPT를 호출하여 프로젝트의 자체 분석 보고서를 다시 생성할 수도 있습니다. 자주 묻는 질문은 [`위키`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)에서 볼 수 있습니다. [설치 방법](#installation).
17
+ >
18
+ > 3. 이 프로젝트는 국내 언어 모델 chatglm과 RWKV, 판고 등의 시도와 호환 가능합니다. 여러 개의 api-key를 지원하며 설정 파일에 "API_KEY="openai-key1,openai-key2,api2d-key3""와 같이 작성할 수 있습니다. `API_KEY`를 임시로 변경해야하는 경우 입력 영역에 임시 `API_KEY`를 입력 한 후 엔터 키를 누르면 즉시 적용됩니다.
19
+
20
+ <div align="center">기능 | 설명
21
+ --- | ---
22
+ 원클릭 윤문 | 원클릭 윤문 기능 및 논문 문법 오류 찾기 지원
23
+ 원클릭 한-영 번역 | 원클릭 한-영 번역 지원
24
+ 코드 설명 | 코드 표시, 코드 설명, 코드 생성, 코드에 주석 추가
25
+ [사용자 정의 바로 가기 키](https://www.bilibili.com/video/BV14s4y1E7jN) | 사용자 정의 바로 가기 키 지원
26
+ 모듈식 설계 | 강력한 [함수 플러그인](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions) 지원, 플러그인이 [핫 리로드](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)를 지원합니다.
27
+ [자체 프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] 원클릭으로 이 프로젝트의 소스 코드 내용을 이해하는 기능 제공
28
+ [프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] 프로젝트 트리를 분석할 수 있습니다 (Python/C/C++/Java/Lua/...)
29
+ 논문 읽기, 번역 | [함수 플러그인] LaTex/PDF 논문의 전문을 읽고 요약을 생성합니다.
30
+ LaTeX 텍스트[번역](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [원 키워드](https://www.bilibili.com/video/BV1FT411H7c5/) | [함수 플러그인] LaTeX 논문의 번역 또는 개량을 위해 일련의 모드를 번역할 수 있습니다.
31
+ 대량의 주석 생성 | [함수 플러그인] 함수 코멘트를 대량으로 생성할 수 있습니다.
32
+ Markdown 한-영 번역 | [함수 플러그인] 위의 5 종 언어의 [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)를 볼 수 있습니다.
33
+ chat 분석 보고서 생성 | [함수 플러그인] 수행 후 요약 보고서를 자동으로 생성합니다.
34
+ [PDF 논문 번역](https://www.bilibili.com/video/BV1KT411x7Wn) | [함수 플러그인] PDF 논문이 제목 및 요약을 추출한 후 번역됩니다. (멀티 스레드)
35
+ [Arxiv 도우미](https://www.bilibili.com/video/BV1LM4y1279X) | [함수 플러그인] Arxiv 논문 URL을 입력하면 요약을 번역하고 PDF를 다운로드 할 수 있습니다.
36
+ [Google Scholar 통합 도우미](https://www.bilibili.com/video/BV19L411U7ia) | [함수 플러그인] Google Scholar 검색 페이지 URL을 제공하면 gpt가 [Related Works 작성](https://www.bilibili.com/video/BV1GP411U7Az/)을 도와줍니다.
37
+ 인터넷 정보 집계+GPT | [함수 플러그인] 먼저 GPT가 인터넷에서 정보를 수집하고 질문에 대답 할 수 있도록합니다. 정보가 절대적으로 구식이 아닙니다.
38
+ 수식/이미지/표 표시 | 수식을 [tex 형태와 렌더링 형태](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)로 동시에 표시, 수식 및 코드 강조 지원
39
+ 멀티 스레드 함수 플러그인 지원 | Chatgpt를 여러 요청에서 실행하여 [대량의 텍스트](https://www.bilibili.com/video/BV1FT411H7c5/) 또는 프로그램을 처리 할 수 있습니다.
40
+ 다크 그라디오 테마 시작 | 어둡게 주제를 변경하려면 브라우저 URL 끝에 ```/?__theme=dark```을 추가하면됩니다.
41
+ [다중 LLM 모델](https://www.bilibili.com/video/BV1wT411p7yf) 지원, [API2D](https://api2d.com/) 인터페이스 지원됨 | GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS)가 모두 동시에 작동하는 것처럼 느낄 수 있습니다!
42
+ LLM 모델 추가 및 [huggingface 배치](https://huggingface.co/spaces/qingxu98/gpt-academic) 지원 | 새 Bing 인터페이스(뉴빙) 추가, 칭화 [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) 및 [盘古α](https://openi.org.cn/pangu/) 지원 도입
43
+ 기타 새로운 기능 (이미지 생성 등) ... | 이 문서의 끝부분을 참조하세요. ...
+ 
+ </div>
+ 
+ - 모든 버튼은 functional.py를 동적으로 읽어와서 사용자 정의 기능을 자유롭게 추가할 수 있으며, 클립보드를 해제합니다.
44
+ <div align="center">
45
+ <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
46
+ </div>
47
+
48
+ - 검수/오타 교정
49
+ <div align="center">
50
+ <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
51
+ </div>
52
+
53
+ - 출력에 수식이 포함되어 있으면 텍스와 렌더링의 형태로 동시에 표시되어 복사 및 읽기가 용이합니다.
54
+ <div align="center">
55
+ <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
56
+ </div>
57
+
58
+ - 프로젝트 코드를 볼 시간이 없습니까? 전체 프로젝트를 chatgpt에 직접 표시하십시오
59
+ <div align="center">
60
+ <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
61
+ </div>
62
+
63
+ - 다양한 대형 언어 모델 범용 요청 (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
64
+ <div align="center">
65
+ <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
66
+ </div>
67
+
68
+ ---
69
+ # 설치
70
+ ## 설치 - 방법 1: 직접 실행 (Windows, Linux 또는 MacOS)
71
+
72
+ 1. 프로젝트 다운로드
73
+ ```sh
74
+ git clone https://github.com/binary-husky/chatgpt_academic.git
75
+ cd chatgpt_academic
76
+ ```
77
+
78
+ 2. API_KEY 구성
79
+
80
+ `config.py`에서 API KEY 등 설정을 구성합니다. [특별한 네트워크 환경 설정](https://github.com/binary-husky/gpt_academic/issues/1) .
81
+
82
+ (P.S. 프로그램이 실행될 때, 이름이 `config_private.py`인 기밀 설정 파일이 있는지 우선적으로 확인하고 해당 설정으로 `config.py`의 동일한 이름의 설정을 덮어씁니다. 따라서 구성 읽기 논리를 이해할 수 있다면, `config.py` 옆에 `config_private.py`라는 새 구성 파일을 만들고 `config.py`의 구성을 `config_private.py`로 이동(복사)하는 것이 좋습니다. `config_private.py`는 git으로 관리되지 않으며 개인 정보를 더 안전하게 보호할 수 있습니다. P.S. 프로젝트는 또한 대부분의 옵션을 `환경 변수`를 통해 설정할 수 있으며, `docker-compose` 파일을 참조하여 환경 변수 작성 형식을 확인할 수 있습니다. 우선순위: `환경 변수` > `config_private.py` > `config.py`)
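위에서 설명한 읽기 우선순위는 대략 다음과 같은 파이썬 스케치로 나타낼 수 있습니다(설명용 가상 코드이며, `read_config` 라는 이름과 값들은 실제 프로젝트 코드가 아닙니다):

```python
import os

def read_config(key, config, config_private):
    """우선순위 스케치: 환경 변수 > config_private.py > config.py"""
    # 1. '환경 변수'가 가장 높은 우선순위
    if key in os.environ:
        return os.environ[key]
    # 2. 다음은 config_private.py (git이 관리하지 않음)
    if key in config_private:
        return config_private[key]
    # 3. 마지막으로 config.py 의 기본값
    return config[key]

config = {"API_KEY": "", "WEB_PORT": 50923}        # config.py 의 값
config_private = {"API_KEY": "sk-example-key"}     # config.py 를 덮어씀
# 환경 변수에 API_KEY 가 없다면 config_private 의 값을 출력
print(read_config("API_KEY", config, config_private))
```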
83
+
84
+
85
+ 3. 의존성 설치
86
+ ```sh
87
+ # (I 선택: 기존 python 경험이 있다면) (python 버전 3.9 이상, 최신 버전이 좋습니다), 참고: 공식 pip 소스 또는 알리 pip 소스 사용, 일시적인 교체 방법: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
88
+ python -m pip install -r requirements.txt
89
+
90
+ # (II 선택: Python에 익숙하지 않은 경우) anaconda 사용 방법은 비슷함(https://www.bilibili.com/video/BV1rc411W7Dr):
91
+ conda create -n gptac_venv python=3.11 # anaconda 환경 만들기
92
+ conda activate gptac_venv # anaconda 환경 활성화
93
+ python -m pip install -r requirements.txt # 이 단계도 pip install의 단계와 동일합니다.
94
+ ```
95
+
96
+ <details><summary>Tsinghua ChatGLM / Fudan MOSS를 백엔드로 사용하려면 여기를 클릭하여 이 부분을 확장하세요.</summary>
97
+ <p>
98
+
99
+ [Tsinghua ChatGLM] / [Fudan MOSS]를 백엔드로 사용하려면 추가적인 종속성을 설치해야합니다 (전제 조건 : Python을 이해하고 Pytorch를 사용한 적이 있으며, 컴퓨터가 충분히 강력한 경우) :
100
+ ```sh
101
+ # [선택 사항 I] Tsinghua ChatGLM을 지원합니다. Tsinghua ChatGLM에 대한 참고사항 : "Call ChatGLM fail cannot load ChatGLM parameters normally" 오류 발생시 다음 참조:
102
+ # 1 : 기본 설치된 것들은 torch + cpu 버전입니다. cuda를 사용하려면 torch를 제거한 다음 torch + cuda를 다시 설치해야합니다.
103
+ # 2 : 모델을 로드할 수 없는 기계 구성 때문에, AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)를
104
+ # AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)로 변경합니다.
105
+ python -m pip install -r request_llm/requirements_chatglm.txt
106
+
107
+ # [선택 사항 II] Fudan MOSS 지원
108
+ python -m pip install -r request_llm/requirements_moss.txt
109
+ git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # 다음 코드 줄을 실행할 때 프로젝트 루트 경로에 있어야합니다.
110
+
111
+ # [선택 사항III] AVAIL_LLM_MODELS config.py 구성 파일에 기대하는 모델이 포함되어 있는지 확인하십시오.
112
+ # 현재 지원되는 전체 모델 :
113
+ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
114
+ ```
115
+
116
+ </p>
117
+ </details>
118
+
119
+
120
+
121
+ 4. 실행
122
+ ```sh
123
+ python main.py
124
+ ```
+ 
+ 5. 테스트 함수 플러그인
125
+ ```
126
+ - 테스트 함수 플러그인 템플릿 함수 (GPT에게 오늘의 역사에서 무슨 일이 일어났는지 대답하도록 요청)를 구현하는 데 사용할 수 있습니다. 이 함수를 기반으로 더 복잡한 기능을 구현할 수 있습니다.
127
+ "[함수 플러그인 템플릿 데모] 오늘의 역사"를 클릭하세요.
128
+ ```
129
+
130
+ ## 설치 - 방법 2 : 도커 사용
131
+
132
+ 1. ChatGPT 만 (대부분의 사람들이 선택하는 것을 권장합니다.)
133
+
134
+ ``` sh
135
+ git clone https://github.com/binary-husky/chatgpt_academic.git # 다운로드
136
+ cd chatgpt_academic # 경로 이동
137
+ nano config.py # 아무 텍스트 에디터로 config.py를 열고 "Proxy","API_KEY","WEB_PORT" (예 : 50923) 등을 구성합니다.
138
+ docker build -t gpt-academic . # 설치
139
+
140
+ #(마지막 단계-1 선택) Linux 환경에서는 --net=host를 사용하면 더 편리합니다.
141
+ docker run --rm -it --net=host gpt-academic
142
+ #(마지막 단계-2 선택) macOS / windows 환경에서는 -p 옵션을 사용하여 컨테이너의 포트 (예 : 50923)를 호스트의 포트로 노출해야합니다.
143
+ docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
144
+ ```
145
+
146
+ 2. ChatGPT + ChatGLM + MOSS (Docker에 익숙해야합니다.)
147
+
148
+ ``` sh
149
+ #docker-compose.yml을 수정하여 계획 1 및 계획 3을 삭제하고 계획 2를 유지합니다. docker-compose.yml에서 계획 2의 구성을 수정하면 됩니다. 주석을 참조하십시오.
150
+ docker-compose up
151
+ ```
152
+
153
+ 3. ChatGPT + LLAMA + Pangu + RWKV (Docker에 익숙해야합니다.)
154
+ ``` sh
155
+ #docker-compose.yml을 수정하여 계획 1 및 계획 2을 삭제하고 계획 3을 유지합니다. docker-compose.yml에서 계획 3의 구성을 수정하면 됩니다. 주석을 참조하십시오.
156
+ docker-compose up
157
+ ```
158
+
159
+
160
+ ## 설치 - 방법 3 : 다른 배치 방법
161
+
162
+ 1. 리버스 프록시 URL / Microsoft Azure API 사용 방법
163
+ API_URL_REDIRECT를 `config.py`에 따라 구성하면됩니다.
164
+
165
+ 2. 원격 클라우드 서버 배치 (클라우드 서버 지식과 경험이 필요합니다.)
166
+ [배치위키-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)에 방문하십시오.
167
+
168
+ 3. WSL2 사용 (Windows Subsystem for Linux 하위 시스템)
169
+ [배치 위키-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)에 방문하십시오.
170
+
171
+ 4. 2차 URL(예: `http://localhost/subpath`)에서 실행하는 방법
172
+ [FastAPI 실행 설명서](docs/WithFastapi.md)를 참조하십시오.
173
+
174
+ 5. docker-compose 실행
175
+ docker-compose.yml을 읽은 후 지시 사항에 따라 작업하십시오.
176
+ ---
177
+ # 고급 사용법
178
+ ## 사용자 정의 바로 가기 버튼 / 사용자 정의 함수 플러그인
179
+
180
+ 1. 사용자 정의 바로 가기 버튼 (학술 바로 가기)
181
+ 임의의 텍스트 편집기로 `core_functional.py`를 열어 아래와 같은 엔트리를 추가한 다음, 프로그램을 다시 시작하면 됩니다. (버튼이 이미 성공적으로 추가되어 보이는 경우, 접두사와 접미사는 실시간 수정이 가능하므로 프로그램을 다시 시작할 필요가 없습니다.)
182
+ 예 :
183
+ ```
184
+ "超级英译中": {
185
+ # 접두사. 당신이 요구하는 것을 설명하는 데 사용됩니다. 예를 들어 번역, 코드를 설명, 다듬기 등
186
+ "Prefix": "下面翻译成中文,然后用一个 markdown 表格逐一解释文中出现的专有名词:\n\n",
187
+
188
+ # 접미사. 입력 내용 뒤에 추가됩니다. 예를 들어 접두사와 함께 사용하여 입력 내용을 따옴표로 감쌀 수 있습니다.
189
+ "Suffix": "",
190
+ },
191
+ ```
192
+ <div align="center">
193
+ <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
194
+ </div>
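버튼은 결국 `Prefix` + 사용자 입력 + `Suffix` 를 이어 붙여 요청을 만듭니다. 설명용 가상 스케치입니다(`compose_request` 는 실제 프로젝트 코드가 아닙니다):

```python
def compose_request(prefix, user_text, suffix):
    # 접두사는 입력 앞에, 접미사는 입력 뒤에 붙는다
    return prefix + user_text + suffix

prefix = "下面翻译成中文,然后用一个 markdown 表格逐一解释文中出现的专有名词:\n\n"
print(compose_request(prefix, "Attention is all you need.", ""))
```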
195
+
196
+ 2. 사용자 지정 함수 플러그인
197
+ 강력한 함수 플러그인을 작성하여 원하는 작업을 수행하십시오.
198
+ 이 프로젝트의 플러그인 작성 및 디버깅 난이도는 매우 낮으며, 일부 파이썬 기본 지식만 있으면 제공된 템플릿을 모방하여 플러그인 기능을 구현할 수 있습니다. 자세한 내용은 [함수 플러그인 가이드](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)를 참조하십시오.
199
+ ---
200
+ # 최신 업데이트
201
+ ## 새로운 기능 동향
202
+
203
+ 1. 함수 플러그인 영역에서 '현재 대화 저장'을 호출하면 현재 대화를 읽을 수 있고 복원 가능한 HTML 파일로 저장할 수 있습니다. 또한 함수 플러그인 영역(드롭다운 메뉴)에서 '대화 기록 불러오기'를 호출하면 이전 대화를 복원할 수 있습니다. 팁: 파일을 지정하지 않고 '대화 기록 불러오기'를 클릭하면 기록된 HTML 캐시를 볼 수 있으며 '모든 로컬 대화 기록 삭제'를 클릭하면 모든 HTML 캐시를 삭제할 수 있습니다.
204
+
205
+ 2. 보고서 생성. 대부분의 플러그인은 실행이 끝난 후 작업 보고서를 생성합니다.
206
+
207
+ 3. 모듈화 기능 설계, 간단한 인터페이스로도 강력한 기능을 지원할 수 있습니다.
208
+
209
+ 4. 자체 번역이 가능한 오픈 소스 프로젝트입니다.
210
+
211
+ 5. 다른 오픈 소스 프로젝트를 번역하는 것은 어렵지 않습니다.
212
+
213
+ 6. [live2d](https://github.com/fghrsh/live2d_demo) 장식 기능(기본적으로 비활성화되어 있으며 `config.py`를 수정해야 합니다.)
214
+
215
+ 7. MOSS 대형 언어 모델 지원 추가
216
+
217
+ 8. OpenAI 이미지 생성
218
+
219
+ 9. OpenAI 음성 분석 및 요약
220
+
221
+ 10. LaTeX 전체적인 교정 및 오류 수정
222
+
223
+ ## 버전:
224
+ - version 3.5 (TODO): 자연어를 사용하여 이 프로젝트의 모든 함수 플러그인을 호출하는 기능(우선순위 높음)
225
+ - version 3.4(TODO): 로컬 대형 언어 모델 chatglm의 다중 스레드 지원 향상
226
+ - version 3.3: 인터넷 정보 종합 기능 추가
227
+ - version 3.2: 함수 플러그인이 더 많은 인수 인터페이스를 지원합니다.(대화 저장 기능, 임의의 언어 코드 해석 및 동시에 임의의 LLM 조합을 확인하는 기능)
228
+ - version 3.1: 여러 개의 GPT 모델에 대한 동시 쿼리 지원! api2d 지원, 여러 개의 apikey 로드 밸런싱 지원
229
+ - version 3.0: chatglm 및 기타 소형 llm의 지원
230
+ - version 2.6: 플러그인 구조를 재구성하여 상호 작용성을 향상시켰습니다. 더 많은 플러그인을 추가했습니다.
231
+ - version 2.5: 자체 업데이트, 전체 프로젝트를 요약할 때 텍스트가 너무 길어지고 토큰이 오버플로우되는 문제를 해결했습니다.
232
+ - version 2.4: (1) PDF 전체 번역 기능 추가; (2) 입력 영역 위치 전환 기능 추가; (3) 수직 레이아웃 옵션 추가; (4) 다중 스레드 함수 플러그인 최적화.
233
+ - version 2.3: 다중 스레드 상호 작용성 강화
234
+ - version 2.2: 함수 플러그인 핫 리로드 지원
235
+ - version 2.1: 접는 레이아웃 지원
236
+ - version 2.0: 모듈화 함수 플러그인 도입
237
+ - version 1.0: 기본 기능
238
+
239
+ gpt_academic 개발자 QQ 그룹-2 : 610599535
240
+
241
+ - 알려진 문제
242
+ - 일부 브라우저 번역 플러그인이 이 소프트웨어의 프론트엔드 작동 방식을 방해합니다.
243
+ - gradio 버전이 너무 높거나 낮으면 여러 가지 이상이 발생할 수 있습니다.
244
+
245
+ ## 참고 및 학습 자료
246
+
247
+ ```
248
+ 많은 우수 프로젝트의 디자인을 참고했습니다. 주요 항목은 다음과 같습니다.
249
+
250
+ # 프로젝트 1 : Tsinghua ChatGLM-6B :
251
+ https://github.com/THUDM/ChatGLM-6B
252
+
253
+ # 프로젝트 2 : Tsinghua JittorLLMs:
254
+ https://github.com/Jittor/JittorLLMs
255
+
256
+ # 프로젝트 3 : Edge-GPT :
257
+ https://github.com/acheong08/EdgeGPT
258
+
259
+ # 프로젝트 4 : ChuanhuChatGPT:
260
+ https://github.com/GaiZhenbiao/ChuanhuChatGPT
261
+
262
+ # 프로젝트 5 : ChatPaper :
263
+ https://github.com/kaixindelele/ChatPaper
264
+
265
+ # 더 많은 :
266
+ https://github.com/gradio-app/gradio
267
+ https://github.com/fghrsh/live2d_demo
268
+ ```
docs/README_EN.md CHANGED
@@ -2,204 +2,192 @@
2
  >
3
  > This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
4
  >
 
 
 
5
 
6
- # <img src="logo.png" width="40" > ChatGPT Academic Optimization
7
 
8
- **If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a [README in English](docs/README_EN.md) translated by this project itself.**
 
9
 
10
- > **Note**
11
- >
12
- > 1. Please note that only **functions with red color** supports reading files, some functions are located in the **dropdown menu** of plugins. Additionally, we welcome and prioritize any new plugin PRs with **highest priority**!
13
- >
14
- > 2. The functionality of each file in this project is detailed in the self-translation report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the project. With the iteration of the version, you can also click on the relevant function plugins at any time to call GPT to regenerate the self-analysis report of the project. The FAQ summary is in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98) section.
15
  >
 
 
 
16
 
17
-
18
- <div align="center">
19
-
20
- Function | Description
21
  --- | ---
22
- One-Click Polish | Supports one-click polishing and finding grammar errors in academic papers.
23
- One-Key Translation Between Chinese and English | One-click translation between Chinese and English.
24
- One-Key Code Interpretation | Can correctly display and interpret code.
25
- [Custom Shortcut Keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
26
- [Configure Proxy Server](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports configuring proxy servers.
27
- Modular Design | Supports custom high-order function plugins and [function plugins], and plugins support [hot updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
28
- [Self-programming Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] [One-Key Read] (https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) The source code of this project is analyzed.
29
- [Program Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] One-click can analyze the project tree of other Python/C/C++/Java/Lua/... projects
30
- Read the Paper | [Function Plugin] One-click interpretation of the full text of latex paper and generation of abstracts
31
- Latex Full Text Translation, Proofreading | [Function Plugin] One-click translation or proofreading of latex papers.
32
- Batch Comment Generation | [Function Plugin] One-click batch generation of function comments
33
- Chat Analysis Report Generation | [Function Plugin] After running, an automatic summary report will be generated
34
- [Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function Plugin] Enter the arxiv article url to translate the abstract and download the PDF with one click
35
- [Full-text Translation Function of PDF Paper](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function Plugin] Extract the title & abstract of the PDF paper + translate the full text (multithreading)
36
- [Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function Plugin] Given any Google Scholar search page URL, let gpt help you choose interesting articles.
37
- Formula / Picture / Table Display | Can display both the tex form and the rendering form of formulas at the same time, support formula and code highlighting
38
- Multithreaded Function Plugin Support | Supports multi-threaded calling chatgpt, one-click processing of massive text or programs
39
- Start Dark Gradio [Theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__dark-theme=true``` at the end of the browser url to switch to dark theme
40
- [Multiple LLM Models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | It must feel nice to be served by both GPT3.5, GPT4, and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B)!
41
- Huggingface non-Science Net [Online Experience](https://huggingface.co/spaces/qingxu98/gpt-academic) | After logging in to huggingface, copy [this space](https://huggingface.co/spaces/qingxu98/gpt-academic)
42
- ... | ...
43
-
44
- </div>
45
-
46
-
47
- - New interface (switch between "left-right layout" and "up-down layout" by modifying the LAYOUT option in config.py)
48
  <div align="center">
49
  <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
50
- </div>
51
-
52
-
53
- - All buttons are dynamically generated by reading functional.py and can add custom functionality at will, freeing up clipboard
54
  <div align="center">
55
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
56
  </div>
57
 
58
- - Proofreading / correcting
59
  <div align="center">
60
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
61
  </div>
62
 
63
- - If the output contains formulas, it will be displayed in both the tex form and the rendering form at the same time, which is convenient for copying and reading
64
  <div align="center">
65
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
66
  </div>
67
 
68
- - Don't want to read the project code? Just take the whole project to chatgpt
69
  <div align="center">
70
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
71
  </div>
72
 
73
- - Multiple major language model mixing calls (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
74
  <div align="center">
75
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
76
  </div>
77
 
78
- Multiple major language model mixing call [huggingface beta version](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (the huggingface version does not support chatglm)
79
-
80
-
81
  ---
 
 
82
 
83
- ## Installation-Method 1: Run directly (Windows, Linux or MacOS)
84
-
85
- 1. Download project
86
  ```sh
87
  git clone https://github.com/binary-husky/chatgpt_academic.git
88
  cd chatgpt_academic
89
  ```
90
 
91
- 2. Configure API_KEY and proxy settings
92
 
 
93
 
94
- In `config.py`, configure the overseas Proxy and OpenAI API KEY as follows:
95
- ```
96
- 1. If you are in China, you need to set up an overseas proxy to use the OpenAI API smoothly. Please read config.py carefully for setup details (1. Modify USE_PROXY to True; 2. Modify proxies according to the instructions).
97
- 2. Configure the OpenAI API KEY. You need to register and obtain an API KEY on the OpenAI website. Once you get the API KEY, you can configure it in the config.py file.
98
- 3. Issues related to proxy networks (network timeouts, proxy failures) are summarized at https://github.com/binary-husky/chatgpt_academic/issues/1
99
- ```
100
- (P.S. When the program runs, it will first check whether there is a private configuration file named `config_private.py` and use the same-name configuration in `config.py` to overwrite it. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configuration in `config.py` to` config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure.))
101
 
102
 
103
- 3. Install dependencies
104
  ```sh
105
- # (Option One) Recommended
106
- python -m pip install -r requirements.txt
107
-
108
- # (Option Two) If you use anaconda, the steps are similar:
109
- # (Option Two.1) conda create -n gptac_venv python=3.11
110
- # (Option Two.2) conda activate gptac_venv
111
- # (Option Two.3) python -m pip install -r requirements.txt
112
 
113
- # Note: Use official pip source or Ali pip source. Other pip sources (such as some university pips) may have problems, and temporary replacement methods are as follows:
114
- # python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
 
 
115
  ```
116
 
117
- If you need to support Tsinghua ChatGLM, you need to install more dependencies (if you are not familiar with python or your computer configuration is not good, we recommend not to try):
 
 
 
118
  ```sh
119
- python -m pip install -r request_llm/requirements_chatglm.txt
 
 
 
 
 
 
 
 
120
  ```
121
 
122
- 4. Run
 
 
 
 
 
123
  ```sh
124
  python main.py
 
125
  ```
126
-
127
- 5. Test function plugins
128
- ```
129
- - Test Python project analysis
130
- In the input area, enter `./crazy_functions/test_project/python/dqn`, and then click "Analyze the entire Python project"
131
- - Test self-code interpretation
132
- Click "[Multithreading Demo] Interpretation of This Project Itself (Source Code Interpretation)"
133
- - Test experimental function template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions.
134
  Click "[Function Plugin Template Demo] Today in History"
135
- - There are more functions to choose from in the function plugin area drop-down menu.
136
  ```
137
 
138
- ## Installation-Method 2: Use Docker (Linux)
 
 
139
 
140
- 1. ChatGPT only (recommended for most people)
141
  ``` sh
142
- # download project
143
- git clone https://github.com/binary-husky/chatgpt_academic.git
144
- cd chatgpt_academic
145
- # configure overseas Proxy and OpenAI API KEY
146
- Edit config.py with any text editor
147
- # Install
148
- docker build -t gpt-academic .
149
- # Run
150
  docker run --rm -it --net=host gpt-academic
 
 
 
151
 
152
- # Test function plug-in
153
- ## Test function plugin template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions.
154
- Click "[Function Plugin Template Demo] Today in History"
155
- ## Test Abstract Writing for Latex Projects
156
- Enter ./crazy_functions/test_project/latex/attention in the input area, and then click "Read Tex Paper and Write Abstract"
157
- ## Test Python Project Analysis
158
- Enter ./crazy_functions/test_project/python/dqn in the input area and click "Analyze the entire Python project."
159
 
160
- More functions are available in the function plugin area drop-down menu.
 
 
161
  ```
162
 
163
- 2. ChatGPT+ChatGLM (requires strong familiarity with docker + strong computer configuration)
164
 
165
  ``` sh
166
- # Modify dockerfile
167
- cd docs && nano Dockerfile+ChatGLM
168
- # How to build | 如何构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs)
169
- docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
170
- # How to run | 如何运行 (1) 直接运行:
171
- docker run --rm -it --net=host --gpus=all gpt-academic
172
- # How to run | 如何运行 (2) 我想运行之前进容器做一些调整:
173
- docker run --rm -it --net=host --gpus=all gpt-academic bash
174
  ```
175
 
 
176
 
177
- ## Installation-Method 3: Other Deployment Methods
 
178
 
179
- 1. Remote Cloud Server Deployment
180
- Please visit [Deployment Wiki-1] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
181
 
182
- 2. Use WSL2 (Windows Subsystem for Linux)
183
  Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
184
 
 
 
185
 
186
- ## Installation-Proxy Configuration
187
- ### Method 1: Conventional method
188
- [Configure Proxy](https://github.com/binary-husky/chatgpt_academic/issues/1)
189
-
190
- ### Method Two: Step-by-step tutorial for newcomers
191
- [Step-by-step tutorial for newcomers](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
192
 
193
  ---
 
 
194
 
195
- ## Customizing Convenient Buttons (Customizing Academic Shortcuts)
196
- Open `core_functional.py` with any text editor and add an item as follows, then restart the program (if the button has been successfully added and visible, both the prefix and suffix support hot modification without the need to restart the program to take effect). For example:
 
197
  ```
198
- "Super English to Chinese translation": {
199
- # Prefix, which will be added before your input. For example, to describe your requirements, such as translation, code interpretation, polishing, etc.
200
- "Prefix": "Please translate the following content into Chinese and use a markdown table to interpret the proprietary terms in the text one by one:\n\n",
201
-
202
- # Suffix, which will be added after your input. For example, combined with the prefix, you can put your input content in quotes.
203
  "Suffix": "",
204
  },
205
  ```
@@ -207,85 +195,125 @@ Open `core_functional.py` with any text editor and add an item as follows, then
207
  <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
208
  </div>
209
 
210
- ---
211
 
 
 
 
212
 
213
- ## Some Function Displays
 
 
 
214
 
215
- ### Image Display:
 
 
216
 
217
 
218
- You are a professional academic paper translator.
219
 
220
  <div align="center">
221
- <img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
 
 
222
  </div>
223
 
224
- ### If a program can understand and analyze itself:
 
225
 
226
  <div align="center">
227
- <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
 
228
  </div>
229
 
 
 
 
230
  <div align="center">
231
- <img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
232
  </div>
233
 
234
- ### Analysis of any Python/Cpp project:
 
235
  <div align="center">
236
- <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
237
  </div>
238
 
239
  <div align="center">
240
- <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
241
  </div>
242
 
243
- ### One-click reading comprehension and summary generation of Latex papers
 
244
  <div align="center">
245
- <img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
246
  </div>
247
 
248
- ### Automatic report generation
249
  <div align="center">
250
- <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
251
- <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
252
- <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
253
  </div>
254
 
255
- ### Modular functional design
256
  <div align="center">
257
- <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
258
- <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
259
  </div>
260
 
261
- ### Source code translation to English
 
 
 
262
 
 
263
  <div align="center">
264
- <img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
265
  </div>
266
 
267
- ## Todo and version planning:
268
- - version 3.2+ (todo): Function plugin supports more parameter interfaces
269
- - version 3.1: Support for inquiring multiple GPT models at the same time! Support for api2d, support for multiple apikeys load balancing
270
- - version 3.0: Support for chatglm and other small llms
271
- - version 2.6: Refactored the plugin structure, improved interactivity, added more plugins
272
- - version 2.5: Self-updating, solves the problem of text being too long and token overflowing when summarizing large project source code
273
- - version 2.4: (1) Added PDF full text translation function; (2) Added function to switch input area position; (3) Added vertical layout option; (4) Multi-threaded function plugin optimization.
274
- - version 2.3: Enhanced multi-threaded interactivity
275
- - version 2.2: Function plugin supports hot reloading
276
- - version 2.1: Foldable layout
277
- - version 2.0: Introduction of modular function plugins
278
- - version 1.0: Basic functions
279
-
280
- ## Reference and learning
 
 
 
 
 
 
 
 
 
 
281
 
282
  ```
283
- The code design of this project has referenced many other excellent projects, including:
 
 
 
284
 
285
- # Reference project 1: Borrowed many tips from ChuanhuChatGPT
 
 
 
 
 
 
286
  https://github.com/GaiZhenbiao/ChuanhuChatGPT
287
 
288
- # Reference project 2: Tsinghua ChatGLM-6B:
289
- https://github.com/THUDM/ChatGLM-6B
290
- ```
291
 
 
 
 
 
 
2
  >
3
  > This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
4
  >
5
+ > When installing dependencies, **please strictly use the versions** specified in requirements.txt.
6
+ >
7
+ > `pip install -r requirements.txt`
8
 
9
+ # GPT Academic Optimization (GPT Academic)
10
 
11
+ **If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request.
12
+ To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).**
13
 
14
+ > Note:
 
 
 
 
15
  >
16
+ > 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**!
17
+ > 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). With version iteration, you can also click on related function plugins at any time to call GPT to regenerate the project's self-analysis report. Common questions are summarized in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation).
18
+ > 3. This project is compatible with and encourages trying domestic large language models such as chatglm, RWKV, Pangu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. When temporarily changing `API_KEY`, enter the temporary `API_KEY` in the input area and press enter to submit, which will take effect.
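
The comma-separated `API_KEY` format above can be resolved per request. A minimal sketch of such key load balancing (the function name `select_api_key` and the `prefix` filter are illustrative assumptions, not the project's actual API):

```python
import random

def select_api_key(api_key_conf: str, prefix: str = "") -> str:
    """Pick one key from a comma-separated API_KEY setting.

    `prefix` is a hypothetical filter, e.g. "api2d-" to route a request
    to API2D keys only; if no key matches, fall back to all keys.
    """
    keys = [k.strip() for k in api_key_conf.split(",") if k.strip()]
    candidates = [k for k in keys if k.startswith(prefix)] or keys
    return random.choice(candidates)
```

For example, `select_api_key("openai-key1,openai-key2,api2d-key3", prefix="api2d-")` returns `"api2d-key3"`, while calls without a prefix spread load randomly over all configured keys.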
19
 
20
+ <div align="center">
+
+ Function | Description
 
 
 
21
  --- | ---
22
+ One-click polishing | Supports one-click polishing and one-click searching for grammar errors in papers.
23
+ One-click Chinese-English translation | One-click Chinese-English translation.
24
+ One-click code interpretation | Displays, explains, generates, and adds comments to code.
25
+ [Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
26
+ Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions), plug-ins support [hot update](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
27
+ [Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project
28
+ [Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/...
29
+ Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts.
30
+ Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers.
31
+ Batch annotation generation | [Function plug-in] One-click batch generation of function annotations.
32
+ Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the five languages above?
33
+ Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running.
34
+ [PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded)
35
+ [Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click.
36
+ [Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plug-in] Given any Google Scholar search page URL, let GPT help you [write related works](https://www.bilibili.com/video/BV1GP411U7Az/)
37
+ Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated.
38
+ Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting.
39
+ Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click.
40
+ Start Dark Gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme.
41
+ [Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right?
42
+ More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/)
43
+ More new feature displays (image generation, etc.)…… | See the end of this document for more...
44
+
45
+ - New interface (modify the LAYOUT option in `config.py` to switch between "left and right layout" and "up and down layout")
 
 
46
  <div align="center">
47
  <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
48
+ </div>
+
+ - All buttons are dynamically generated by reading `functional.py`, so you can freely add custom functions and unleash the power of the clipboard.
 
 
 
49
  <div align="center">
50
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
51
  </div>
52
 
53
+ - Polishing/correction
54
  <div align="center">
55
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
56
  </div>
57
 
58
+ - If the output contains formulas, they will be displayed in both `tex` form and rendered form, making them easy to copy and read.
59
  <div align="center">
60
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
61
  </div>
62
 
63
+ - Tired of reading the project code? ChatGPT can explain it all.
64
  <div align="center">
65
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
66
  </div>
67
 
68
+ - Multiple large language models are mixed, such as ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4.
69
  <div align="center">
70
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
71
  </div>
72
 
 
 
 
73
  ---
74
+ # Installation
75
+ ## Method 1: Directly running (Windows, Linux or MacOS)
76
 
77
+ 1. Download the project
 
 
78
  ```sh
79
  git clone https://github.com/binary-husky/chatgpt_academic.git
80
  cd chatgpt_academic
81
  ```
82
 
83
+ 2. Configure the API_KEY
84
 
85
+ Configure the API KEY in `config.py`. For proxies and other special network environments, see [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
86
 
87
+ (P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py` and use the configurations in it to override the same configurations in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configurations in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your private information more secure. P.S. The project also supports configuring most options through `environment variables`. Please refer to the format of `docker-compose` file when writing. Reading priority: `environment variables` > `config_private.py` > `config.py`)
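
The reading priority described above (`environment variables` > `config_private.py` > `config.py`) can be captured in a single lookup helper. A minimal sketch under that assumption; `read_single_conf` is a hypothetical name, not necessarily the project's actual reader:

```python
import os

def read_single_conf(name, default=None):
    """Resolve one option, highest-priority source first (illustrative only)."""
    # 1. An environment variable wins if present.
    if name in os.environ:
        return os.environ[name]
    # 2. Otherwise try the git-ignored config_private.py.
    try:
        import config_private
        if hasattr(config_private, name):
            return getattr(config_private, name)
    except ImportError:
        pass
    # 3. Finally fall back to the tracked config.py.
    try:
        import config
        return getattr(config, name, default)
    except ImportError:
        return default
```

With this order, `read_single_conf("API_KEY")` would prefer an exported environment variable over either configuration file.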
 
 
 
 
 
 
88
 
89
 
90
+ 3. Install the dependencies
91
  ```sh
92
+ # (Option I: if familiar with Python; Python 3.9 or above, the newer the better)
+ # Note: use the official pip source or the Aliyun pip source; to switch temporarily:
+ # python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
93
+ python -m pip install -r requirements.txt
 
 
 
 
 
94
 
95
+ # (Option II: If not familiar with python) Use anaconda, the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
96
+ conda create -n gptac_venv python=3.11 # create anaconda environment
97
+ conda activate gptac_venv # activate anaconda environment
98
+ python -m pip install -r requirements.txt # this step is the same as pip installation
99
  ```
100
 
101
+ <details><summary>If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand</summary>
102
+ <p>
103
+
104
+ [Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiar with Python + used Pytorch + computer configuration is strong enough):
105
  ```sh
106
+ # [Optional Step I] Support Tsinghua ChatGLM. Note: if you encounter the "Call ChatGLM fail cannot load ChatGLM parameters" error, refer to the following:
+ #   1. The default installation above is the torch+cpu version; to use CUDA, uninstall torch and reinstall torch+cuda.
+ #   2. If the model cannot be loaded due to insufficient local hardware, modify the model precision in request_llm/bridge_chatglm.py:
+ #      change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
107
+ python -m pip install -r request_llm/requirements_chatglm.txt
108
+
109
+ # [Optional Step II] Support Fudan MOSS
110
+ python -m pip install -r request_llm/requirements_moss.txt
111
+ git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the root directory of the project
112
+
113
+ # [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file includes the expected models. Currently supported models are as follows (the jittorllms series only supports the docker solution for the time being):
114
+ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
115
  ```
116
 
117
+ </p>
118
+ </details>
119
+
120
+
121
+
122
+ 4. Run it
123
  ```sh
124
  python main.py
125
+ ```
+
+ 5. Test Function Plugin
126
  ```
127
+ - Test the function plugin template (ask GPT what happened in history on this day); you can use it as a template to implement more complex functions
 
 
 
 
 
 
 
128
  Click "[Function Plugin Template Demo] Today in History"
 
129
  ```
130
 
131
+ ## Installation - Method 2: Using Docker
132
+
133
+ 1. ChatGPT Only (Recommended for Most People)
134
 
 
135
  ``` sh
136
+ git clone https://github.com/binary-husky/chatgpt_academic.git # Download project
137
+ cd chatgpt_academic # Enter path
138
+ nano config.py # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc.
139
+ docker build -t gpt-academic . # Install
140
+
141
+ # (Last step - option 1) In a Linux environment, use `--net=host` for convenience and speed.
 
 
142
  docker run --rm -it --net=host gpt-academic
143
+ # (Last step - option 2) On macOS/Windows, only the -p option can be used to expose the container's port (e.g. 50923) to a host port.
144
+ docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
145
+ ```
146
 
147
+ 2. ChatGPT + ChatGLM + MOSS (Requires Docker Knowledge)
 
 
 
 
 
 
148
 
149
+ ``` sh
150
+ # Modify docker-compose.yml, delete Plan 1 and Plan 3, and keep Plan 2. Modify the configuration of Plan 2 in docker-compose.yml, refer to the comments in it for configuration.
151
+ docker-compose up
152
  ```
153
 
154
+ 3. ChatGPT + LLAMA + Pangu + RWKV (Requires Docker Knowledge)
155
 
156
  ``` sh
157
+ # Modify docker-compose.yml, delete Plan 1 and Plan 2, and keep Plan 3. Modify the configuration of Plan 3 in docker-compose.yml, refer to the comments in it for configuration.
158
+ docker-compose up
 
 
 
 
 
 
159
  ```
160
 
161
+ ## Installation - Method 3: Other Deployment Options

+ 1. Using a reverse-proxy URL / Microsoft Azure cloud API
+ Configure API_URL_REDIRECT according to the instructions in `config.py`.
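For illustration, a hedged sketch of what such a redirect entry in `config.py` might look like (the target URL below is a placeholder for your own reverse proxy or Azure-compatible endpoint, not a real address):

```python
# Hypothetical illustration only: map the official OpenAI endpoint to a
# self-hosted reverse proxy. Substitute your real deployment's address.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
        "https://example-proxy.internal/v1/chat/completions",
}
```

Requests that would normally go to the official endpoint are then routed to the redirected address instead.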
 
+ 2. Deploying to a remote cloud server (requires knowledge and experience with cloud servers)
+ Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

+ 3. Using WSL2 (Windows Subsystem for Linux)
  Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

+ 4. Running under a sub-path (e.g. `http://localhost/subpath`)
+ Please visit the [FastAPI running instructions](docs/WithFastapi.md)

+ 5. Running with docker-compose
+ Read docker-compose.yml and follow its prompts.
  ---
+ # Advanced Usage
+ ## Custom New Shortcut Buttons / Custom Function Plugins

+ 1. Custom new shortcut buttons (academic hotkeys)
+ Open `core_functional.py` with any text editor, add an entry as follows, and restart the program. (If the button was added successfully and is visible, the prefix and suffix can both be hot-modified without restarting the program.)
+ For example:
  ```
+ "Super English-to-Chinese": {
+ # Prefix: added before your input; e.g. it can describe your request, such as translation, code explanation, or polishing.
+ "Prefix": "Please translate the following content into Chinese and then use a markdown table to explain the proprietary terms that appear in the text:\n\n",
+
+ # Suffix: added after your input; e.g. combined with the prefix, it can wrap your input in quotation marks.
  "Suffix": "",
  },
  ```

  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
  </div>
 
+ 2. Custom function plugins

+ Write powerful function plugins to perform any task you can think of, and even tasks you have not thought of.
+ Writing and debugging plugins in this project is easy: with some basic knowledge of Python, you can implement your own plugin functions by following the template we provide.
+ For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
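As a sketch of what that interface looks like: a plugin is a Python generator with the signature used throughout `crazy_functions/` (`txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port`). The example below is hypothetical and stubs out the LLM call; a real plugin would query the model and update the UI through the project's toolbox helpers.

```python
# Hypothetical, simplified plugin sketch -- real plugins live under
# crazy_functions/ and call the LLM through the project's request helpers.
def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    cookies = {}  # stub; real plugins propagate session state here
    # A plugin is a generator: each `yield` pushes the current chatbot
    # state back to the web front end, which enables streaming updates.
    chatbot.append((txt, "Processing, please wait..."))
    yield cookies, chatbot, history, "normal"

    reply = f"(stubbed LLM reply for: {txt})"  # a real plugin queries the model here
    chatbot[-1] = (txt, reply)                 # replace the placeholder answer
    history.extend([txt, reply])               # keep the dialogue history in sync
    yield cookies, chatbot, history, "normal"
```

Driving the generator drains its UI updates, mirroring how this repository's test harness iterates plugins: `for cookies, cb, hist, msg in demo_plugin("hi", {}, {}, [], [], "", None): ...`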
 
+ ---
+ # Latest Update
+ ## Recent Features
+ 1. Conversation saving. Call `Save current conversation` in the function plugin area to save the current conversation as a readable and restorable HTML file. Call `Load conversation history archive` in the function plugin area (dropdown menu) to restore a previous session. Tip: clicking `Load conversation history archive` without specifying a file shows the cached HTML archives, and clicking `Delete all local conversation history` deletes all cached HTML archives.

+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
+ </div>
 
+ 2. Report generation. Most plugins generate a work report after execution.

  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
+ <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
+ <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
  </div>

+ 3. Modular function design: simple interfaces support powerful functions.

  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
+ <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
  </div>

+ 4. This is an open-source project that can "translate itself".

  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
  </div>

+ 5. Translating other open-source projects is a piece of cake.

  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
  </div>

  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
  </div>

+ 6. A small feature decorated with [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default; enabling it requires modifying `config.py`).

  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
  </div>

+ 7. Added support for the MOSS large language model.
  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
  </div>

+ 8. OpenAI image generation.
  <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
  </div>

+ 9. OpenAI audio parsing and summarization.
+ <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
+ </div>

+ 10. Full-text LaTeX proofreading and error correction.
  <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
  </div>
 
+ ## Versions:
+ - version 3.5 (todo): call all function plugins of this project with natural language (high priority)
+ - version 3.4 (todo): improve multi-threading support for locally deployed chatglm models
+ - version 3.3: + internet information integration
+ - version 3.2: function plugins support more parameter interfaces (conversation saving, interpreting code in any language + asking any combination of LLMs simultaneously)
+ - version 3.1: support querying multiple GPT models simultaneously! Support api2d, support load balancing across multiple api keys
+ - version 3.0: support chatglm and other small LLMs
+ - version 2.6: refactored the plugin structure, improved interactivity, added more plugins
+ - version 2.5: self-updating; fixed text overflow and token overflow when summarizing the source code of large projects
+ - version 2.4: (1) added PDF full-text translation; (2) added switching the position of the input area; (3) added a vertical layout option; (4) optimized multi-threaded function plugins
+ - version 2.3: enhanced multi-threaded interactivity
+ - version 2.2: function plugins support hot reloading
+ - version 2.1: collapsible layout
+ - version 2.0: introduced modular function plugins
+ - version 1.0: basic functions
+
+ gpt_academic developer QQ group 2: 610599535
+
+ - Known issues
+   - Some browser translation plugins interfere with the front end of this software
+   - A gradio version that is either too new or too old causes various exceptions
+
+ ## Reference and Learning

  ```
+ The code references many excellent designs from other projects, mainly including:
+
+ # Project 1: THU ChatGLM-6B:
+ https://github.com/THUDM/ChatGLM-6B
+
+ # Project 2: THU JittorLLMs:
+ https://github.com/Jittor/JittorLLMs
+
+ # Project 3: Edge-GPT:
+ https://github.com/acheong08/EdgeGPT
+
+ # Project 4: ChuanhuChatGPT:
  https://github.com/GaiZhenbiao/ChuanhuChatGPT

+ # Project 5: ChatPaper:
+ https://github.com/kaixindelele/ChatPaper
+
+ # More:
+ https://github.com/gradio-app/gradio
+ https://github.com/fghrsh/live2d_demo
+ ```
docs/README_FR.md CHANGED
@@ -2,295 +2,320 @@
  >
  > Ce fichier README est généré automatiquement par le plugin de traduction markdown de ce projet et n'est peut-être pas correct à 100 %.
  >

- # <img src="logo.png" width="40" > ChatGPT Optimisation Académique

- **Si vous aimez ce projet, donnez-lui une étoile; si vous avez inventé des raccourcis académiques plus utiles ou des plugins fonctionnels, n'hésitez pas à ouvrir une demande ou une demande de traction. Nous avons également un fichier README en [anglais|](docs/README_EN.md)[japonais|](docs/README_JP.md)[russe|](docs/README_RS.md)[français](docs/README_FR.md) traduit par ce projet lui-même.**

  > **Note**
  >
- > 1. Veuillez noter que seuls les plugins de fonction signalés en **rouge** sont capables de lire les fichiers, certains plugins se trouvent dans le **menu déroulant** de la section plugin. Nous sommes également les bienvenus avec la plus haute priorité pour traiter et accepter tout nouveau PR de plugin!
  >
- > 2. Chaque fichier dans ce projet est expliqué en détail dans l'auto-analyse [self_analysis.md](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins fonctionnels pertinents pour appeler GPT et générer un rapport d'auto-analyse projet mis à jour. Les questions fréquemment posées sont résumées dans le [wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
- >
-
- <div align="center">

- Fonctionnalité | Description
  --- | ---
- Polissage en un clic | Prend en charge la correction en un clic et la recherche d'erreurs de syntaxe dans les documents de recherche.
- Traduction Chinois-Anglais en un clic | Une touche pour traduire la partie chinoise en anglais ou celle anglaise en chinois.
- Explication de code en un clic | Affiche et explique correctement le code.
- [Raccourcis clavier personnalisables](https://www.bilibili.com/video/BV14s4y1E7jN) | Prend en charge les raccourcis clavier personnalisables.
- [Configuration du serveur proxy](https://www.bilibili.com/video/BV1rc411W7Dr) | Prend en charge la configuration du serveur proxy.
- Conception modulaire | Prend en charge la personnalisation des plugins de fonctions et des [plugins] de fonctions hiérarchiques personnalisés, et les plugins prennent en charge [la mise à jour à chaud](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
- [Auto-analyse du programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] [Lire en un clic](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) le code source de ce projet.
- [Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] En un clic, les projets Python/C/C++/Java/Lua/... peuvent être analysés.
- Lire le document de recherche | [Plugins] Lisez le résumé de l'article en latex et générer un résumé.
- Traduction et polissage de l'article complet en LaTeX | [Plugins] Une touche pour traduire ou corriger en LaTeX
- Génération Commentaire de fonction en vrac | [Plugins] Lisez en un clic les fonctions et générez des commentaires de fonction.
- Rapport d'analyse automatique des chats générés | [Plugins] Génère un rapport de synthèse après l'exécution.
- [Assistant arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugins] Entrez l'url de l'article arxiv pour traduire le résumé + télécharger le PDF en un clic
- [Traduction complète des articles PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugins] Extraire le titre et le résumé de l'article PDF + Traduire le texte entier (multithread)
- [Aide à la recherche Google Academ](https://www.bilibili.com/video/BV19L411U7ia) | [Plugins] Donnez à GPT l'URL de n'importe quelle page de recherche Google Academ pour vous aider à sélectionner des articles intéressants
- Affichage de formules/images/tableaux | Afficher la forme traduite et rendue d'une formule en même temps, plusieurs formules et surlignage du code prend en charge
- Prise en charge des plugins multithread | Prise en charge de l'appel multithread de chatgpt, traitement en masse de texte ou de programmes en un clic
- Activer le thème Gradio sombre [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) au démarrage | Ajoutez ```/?__dark-theme=true``` à l'URL du navigateur pour basculer vers le thème sombre
- [Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [prise en charge de l'interface API2D](https://api2d.com/) | Comment cela serait-il de se faire servir par GPT3.5, GPT4 et la [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B) en même temps?
- Expérience en ligne d'huggingface sans science | Après vous être connecté à huggingface, copiez [cet espace](https://huggingface.co/spaces/qingxu98/gpt-academic)
- ... | ...

  </div>

- Vous êtes un traducteur professionnel d'articles universitaires en français.
-
- Ceci est un fichier Markdown, veuillez le traduire en français sans modifier les commandes Markdown existantes :
-
- - Nouvelle interface (modifiable en modifiant l'option de mise en page dans config.py pour basculer entre les mises en page gauche-droite et haut-bas)
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
- </div>
-
-
- - Tous les boutons sont générés dynamiquement en lisant functional.py, les utilisateurs peuvent ajouter librement des fonctions personnalisées pour libérer le presse-papiers.
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
  </div>

- - Correction/amélioration
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
  </div>

- - Si la sortie contient des formules, elles seront affichées simultanément sous forme de de texte brut et de forme rendue pour faciliter la copie et la lecture.
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
  </div>

- - Pas envie de lire le code du projet ? Faites votre propre démo avec ChatGPT.
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
  </div>

- - Utilisation combinée de plusieurs modèles de langage sophistiqués (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
  </div>

- Utilisation combinée de plusieurs modèles de langage sophistiqués en version de test [huggingface](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (la version huggingface ne prend pas en charge Chatglm).
-
-
  ---

- ## Installation - Méthode 1 : Exécution directe (Windows, Linux or MacOS)
-
- 1. Téléchargez le projet
  ```sh
  git clone https://github.com/binary-husky/chatgpt_academic.git
  cd chatgpt_academic
  ```

- 2. Configuration de l'API_KEY et des paramètres de proxy

- Dans `config.py`, configurez les paramètres de proxy et de clé d'API OpenAI, comme indiqué ci-dessous
- ```
- 1. Si vous êtes en Chine, vous devez configurer un proxy étranger pour utiliser l'API OpenAI en toute transparence. Pour ce faire, veuillez lire attentivement le fichier config.py (1. Modifiez l'option USE_PROXY ; 2. Modifiez les paramètres de proxies comme indiqué dans les instructions).
- 2. Configurez votre clé API OpenAI. Vous devez vous inscrire sur le site web d'OpenAI pour obtenir une clé API. Une fois que vous avez votre clé API, vous pouvez la configurer dans le fichier config.py.
- 3. Tous les problèmes liés aux réseaux de proxy (temps d'attente, non-fonctionnement des proxies) sont résumés dans https://github.com/binary-husky/chatgpt_academic/issues/1.
- ```
- (Remarque : le programme vérifie d'abord s'il existe un fichier de configuration privé nommé `config_private.py`, et utilise les configurations de celui-ci à la place de celles du fichier `config.py`. Par conséquent, si vous comprenez notre logique de lecture de configuration, nous vous recommandons fortement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de celui-ci dans `config_private.py`. `config_private.py` n'est pas contrôlé par git et rend vos informations personnelles plus sûres.)

- 3. Installation des dépendances
- ```sh
- # (Option 1) Recommandé
- python -m pip install -r requirements.txt

- # (Option 2) Si vous utilisez anaconda, les étapes sont similaires :
- # (Option 2.1) conda create -n gptac_venv python=3.11
- # (Option 2.2) conda activate gptac_venv
- # (Option 2.3) python -m pip install -r requirements.txt

- # note : Utilisez la source pip officielle ou la source pip Alibaba. D'autres sources (comme celles des universités) pourraient poser problème. Pour utiliser temporairement une autre source, utilisez :
- # python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
  ```

- Si vous avez besoin de soutenir ChatGLM de Tsinghua, vous devez installer plus de dépendances (si vous n'êtes pas familier avec Python ou que votre ordinateur n'est pas assez performant, nous vous recommandons de ne pas essayer) :
  ```sh
- python -m pip install -r request_llm/requirements_chatglm.txt
  ```

  4. Exécution
  ```sh
  python main.py
  ```
-
- 5. Tester les plugins de fonctions
- ```
- - Test Python Project Analysis
- Dans la zone de saisie, entrez `./crazy_functions/test_project/python/dqn`, puis cliquez sur "Parse Entire Python Project"
- - Test d'auto-lecture du code
- Cliquez sur "[Démo multi-thread] Parser ce projet lui-même (auto-traduction de la source)"
- - Test du modèle de fonctionnalité expérimentale (exige une réponse de l'IA à ce qui est arrivé aujourd'hui dans l'histoire). Vous pouvez utiliser cette fonctionnalité comme modèle pour des fonctions plus complexes.
- Cliquez sur "[Démo modèle de plugin de fonction] Histoire du Jour"
- - Le menu déroulant de la zone de plugin de fonctionnalité contient plus de fonctionnalités à sélectionner.
  ```

- ## Installation - Méthode 2 : Utilisation de docker (Linux)

- Vous êtes un traducteur professionnel d'articles académiques en français.
-
- 1. ChatGPT seul (recommandé pour la plupart des gens)
  ``` sh
- # Télécharger le projet
- git clone https://github.com/binary-husky/chatgpt_academic.git
- cd chatgpt_academic
- # Configurer le proxy outre-mer et la clé API OpenAI
- Modifier le fichier config.py avec n'importe quel éditeur de texte
- # Installer
- docker build -t gpt-academic .
- # Exécuter
  docker run --rm -it --net=host gpt-academic

- # Tester les modules de fonction
- ## Tester la fonction modèle des modules (requiert la réponse de GPT à "qu'est-ce qui s'est passé dans l'histoire aujourd'hui ?"), vous pouvez utiliser cette fonction en tant que modèle pour implémenter des fonctions plus complexes.
- Cliquez sur "[Exemple de modèle de module] Histoire d'aujourd'hui"
- ## Tester le résumé écrit pour le projet LaTeX
- Dans la zone de saisie, tapez ./crazy_functions/test_project/latex/attention, puis cliquez sur "Lire le résumé de l'article de recherche LaTeX"
- ## Tester l'analyse du projet Python
- Dans la zone de saisie, tapez ./crazy_functions/test_project/python/dqn, puis cliquez sur "Analyser l'ensemble du projet Python"

- D'autres fonctions sont disponibles dans la liste déroulante des modules de fonction.
  ```

- 2. ChatGPT+ChatGLM (nécessite une grande connaissance de docker et une configuration informatique suffisamment puissante)
  ``` sh
- # Modifier le dockerfile
- cd docs && nano Dockerfile+ChatGLM
- # Comment construire | 如何构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs)
- docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
- # Comment exécuter | 如何运行 (1) Directement exécuter :
- docker run --rm -it --net=host --gpus=all gpt-academic
- # Comment exécuter | 如何运行 (2) Je veux effectuer quelques ajustements dans le conteneur avant de lancer :
- docker run --rm -it --net=host --gpus=all gpt-academic bash
  ```

- ## Installation - Méthode 3 : Autres méthodes de déploiement

- 1. Déploiement sur un cloud serveur distant
- Veuillez consulter le [wiki de déploiement-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

- 2. Utilisation de WSL2 (Windows Subsystem for Linux)
- Veuillez consulter le [wiki de déploiement-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

- ## Configuration de la procuration de l'installation
- ### Méthode 1 : Méthode conventionnelle
- [Configuration de la procuration](https://github.com/binary-husky/chatgpt_academic/issues/1)

- ### Méthode 2 : Tutoriel pour débutant pur
- [Tutoriel pour débutant pur](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)

- ---

- ## Personnalisation des nouveaux boutons pratiques (personnalisation des raccourcis académiques)
- Ouvrez le fichier `core_functional.py` avec n'importe quel éditeur de texte, ajoutez les éléments suivants, puis redémarrez le programme. (Si le bouton a déjà été ajouté avec succès et est visible, le préfixe et le suffixe pris en charge peuvent être modifiés à chaud sans avoir besoin de redémarrer le programme.)
- Par exemple:
  ```
- "Traduction Français-Chinois": {
- # Préfixe, qui sera ajouté avant votre saisie. Par exemple, pour décrire votre demande, telle que la traduction, le débogage de code, l'amélioration, etc.
- "Prefix": "Veuillez traduire le contenu ci-dessous en chinois, puis expliquer chaque terme propre mentionné dans un tableau Markdown :\n\n",

- # Suffixe, qui sera ajouté après votre saisie. Par exemple, en combinaison avec un préfixe, vous pouvez mettre le contenu de votre saisie entre guillemets.
  "Suffix": "",
  },
  ```
-
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
  </div>

- ---

- ## Présentation de certaines fonctionnalités

- ### Affichage des images:

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
  </div>

- ### Si un programme peut comprendre et décomposer lui-même :

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
  </div>

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
  </div>

-
- ### Analyse de tout projet Python/Cpp quelconque :
  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
  </div>

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
  </div>

- ### Lecture et résumé générés automatiquement pour les articles en Latex
  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
  </div>

- ### Génération de rapports automatique
  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
- <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
- <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
  </div>

- ### Conception de fonctionnalités modulaires
  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
- <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
  </div>

- ### Traduction de code source en anglais

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
  </div>

- ## À faire et planification de version :
- - version 3.2+ (à faire) : Prise en charge de plus de paramètres d'interface de plugin de fonction
- - version 3.1 : Prise en charge de l'interrogation simultanée de plusieurs modèles GPT ! Prise en charge de l'API2d, prise en charge de la répartition de charge de plusieurs clés API
- - version 3.0 : Prise en charge de chatglm et d'autres petits llm
- - version 2.6 : Réorganisation de la structure du plugin, amélioration de l'interactivité, ajout de plus de plugins
- - version 2.5 : Mise à jour automatique, résolution du problème de dépassement de jeton et de texte trop long lors de la compilation du code source complet
- - version 2.4 : (1) Ajout de la fonctionnalité de traduction intégrale de PDF ; (2) Ajout d'une fonctionnalité de changement de position de zone de saisie ; (3) Ajout d'une option de disposition verticale ; (4) Optimisation du plugin de fonction multi-thread.
- - version 2.3 : Amélioration de l'interactivité multi-thread
- - version 2.2 : Prise en charge du rechargement à chaud du plugin de fonction
- - version 2.1 : Mise en page pliable
- - version 2.0 : Introduction du plugin de fonction modulaire
- - version 1.0 : Fonctionnalité de base
-
- ## Références et apprentissage

  ```
- De nombreux designs d'autres projets exceptionnels ont été utilisés pour référence dans le code, notamment :

- # Projet 1 : De nombreuses astuces ont été empruntées à ChuanhuChatGPT
  https://github.com/GaiZhenbiao/ChuanhuChatGPT

- # Projet 2 : ChatGLM-6B de Tsinghua :
- https://github.com/THUDM/ChatGLM-6B
- ```

2
  >
3
  > Ce fichier README est généré automatiquement par le plugin de traduction markdown de ce projet et n'est peut - être pas correct à 100%.
4
  >
5
+ > During installation, please strictly select the versions **specified** in requirements.txt.
6
+ >
7
+ > `pip install -r requirements.txt`
8
+ >
9
 
10
+ # <img src="logo.png" width="40" > Optimisation académique GPT (GPT Academic)
11
 
12
+ **Si vous aimez ce projet, veuillez lui donner une étoile. Si vous avez trouvé des raccourcis académiques ou des plugins fonctionnels plus utiles, n'hésitez pas à ouvrir une demande ou une pull request.
13
+ Pour traduire ce projet dans une langue arbitraire avec GPT, lisez et exécutez [`multi_language.py`](multi_language.py) (expérimental).
14
 
15
  > **Note**
16
  >
17
+ > 1. Veuillez noter que seuls les plugins de fonctions (boutons) **en rouge** prennent en charge la lecture de fichiers. Certains plugins se trouvent dans le **menu déroulant** de la zone de plugins. De plus, nous accueillons et traitons les nouvelles pull requests pour les plugins avec **la plus haute priorité**!
18
  >
19
+ > 2. Les fonctions de chaque fichier de ce projet sont expliquées en détail dans l'auto-analyse [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins de fonctions pertinents et appeler GPT pour régénérer le rapport d'auto-analyse du projet à tout moment. Les FAQ sont résumées dans [le wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Méthode d'installation](#installation).
20
+ >
21
+ > 3. Ce projet est compatible avec et encourage l'utilisation de grands modèles de langage nationaux tels que chatglm, RWKV, Pangu, etc. La coexistence de plusieurs clés API est prise en charge et peut être remplie dans le fichier de configuration, tel que `API_KEY="openai-key1,openai-key2,api2d-key3"`. Lorsque vous souhaitez remplacer temporairement `API_KEY`, saisissez temporairement `API_KEY` dans la zone de saisie, puis appuyez sur Entrée pour soumettre et activer.
 
22
 
23
+ <div align="center">Functionnalité | Description
24
  --- | ---
25
+ Révision en un clic | prend en charge la révision en un clic et la recherche d'erreurs de syntaxe dans les articles
26
+ Traduction chinois-anglais en un clic | Traduction chinois-anglais en un clic
27
+ Explication de code en un clic | Affichage, explication, génération et ajout de commentaires de code
28
+ [Raccourcis personnalisés](https://www.bilibili.com/video/BV14s4y1E7jN) | prend en charge les raccourcis personnalisés
29
+ Conception modulaire | prend en charge de puissants plugins de fonction personnalisée, les plugins prennent en charge la [mise à jour à chaud](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
30
+ [Autoscanner](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] [Compréhension instantanée](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) du code source de ce projet
31
+ [Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] Analyse en un clic de la structure d'autres projets Python / C / C ++ / Java / Lua / ...
32
+ Lecture d'articles, [traduction](https://www.bilibili.com/video/BV1KT411x7Wn) d'articles | [Plug-in de fonction] Compréhension instantanée de l'article latex / pdf complet et génération de résumés
33
+ [Traduction](https://www.bilibili.com/video/BV1nk4y1Y7Js/) et [révision](https://www.bilibili.com/video/BV1FT411H7c5/) complets en latex | [Plug-in de fonction] traduction ou révision en un clic d'articles en latex
34
+ Génération de commentaires en masse | [Plug-in de fonction] Génération en un clic de commentaires de fonction en masse
35
+ Traduction [chinois-anglais](https://www.bilibili.com/video/BV1yo4y157jV/) en Markdown | [Plug-in de fonction] Avez-vous vu le [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) dans les 5 langues ci-dessus ?
36
+ Génération de rapports d'analyse de chat | [Plug-in de fonction] Génère automatiquement un rapport de résumé après l'exécution
37
+ [Traduction intégrale en pdf](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plug-in de fonction] Extraction de titre et de résumé de l'article pdf + traduction intégrale (multi-thread)
38
+ [Aide à arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plug-in de fonction] Entrer l'url de l'article arxiv pour traduire et télécharger le résumé en un clic
39
+ [Aide à la recherche Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Plug-in de fonction] Donnez l'URL de la page de recherche Google Scholar, laissez GPT vous aider à [écrire des ouvrages connexes](https://www.bilibili.com/video/BV1GP411U7Az/)
40
+ Agrégation d'informations en ligne et GPT | [Plug-in de fonction] Permet à GPT de [récupérer des informations en ligne](https://www.bilibili.com/video/BV1om4y127ck), puis de répondre aux questions, afin que les informations ne soient jamais obsolètes
41
+ Affichage d'équations / images / tableaux | Fournit un affichage simultané de [la forme tex et de la forme rendue](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), prend en charge les formules mathématiques et la coloration syntaxique du code
42
+ Prise en charge des plugins multithread | Prend en charge l'appel multithread de chatgpt ; un clic pour traiter [un grand nombre d'articles](https://www.bilibili.com/video/BV1FT411H7c5/) ou de programmes
43
+ Thème gradio sombre en option de démarrage | Ajoutez ```/?__theme=dark``` à la fin de l'URL du navigateur pour basculer vers le thème sombre
44
+ [Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) | Il doit être très agréable d'être servi simultanément par GPT3.5, GPT4, [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B) et [MOSS de Fudan](https://github.com/OpenLMLab/MOSS)
45
+ Plus de modèles LLM, déploiement sur [huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Ajout de la prise en charge de l'interface Newbing (nouveau Bing), introduction de la prise en charge de [JittorLLMs de Tsinghua](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) et [Panguα](https://openi.org.cn/pangu/)
46
+ Plus de nouvelles fonctionnalités (génération d'images, etc.) ... | Voir la fin de ce document pour plus de détails ...
47
 
48
  </div>
49
 
50
 
51
+ - Nouvelle interface (modifier l'option LAYOUT de `config.py` pour passer d'une disposition ``gauche-droite`` à une disposition ``haut-bas``)
 
 
 
 
52
  <div align="center">
53
  <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
54
+ </div>
+ 
+ - Tous les boutons sont générés dynamiquement en lisant functional.py ; des fonctions personnalisées peuvent y être ajoutées librement, libérant ainsi le presse-papiers.
 
 
 
55
  <div align="center">
56
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
57
  </div>
58
 
59
+ - Correction d'erreurs/lissage du texte.
60
  <div align="center">
61
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
62
  </div>
63
 
64
+ - Si la sortie contient des équations, elles sont affichées à la fois sous forme de tex et sous forme rendue pour faciliter la lecture et la copie.
65
  <div align="center">
66
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
67
  </div>
68
 
69
+ - Pas envie de lire le code de ce projet ? Tout le projet est directement présenté par ChatGPT.
70
  <div align="center">
71
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
72
  </div>
73
 
74
+ - Appel mixte de plusieurs grands modèles de langage (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4).
75
  <div align="center">
76
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
77
  </div>
78
 
 
 
 
79
  ---
80
+ # Installation
81
+ ## Installation - Méthode 1 : Exécution directe (Windows, Linux ou macOS)
82
 
83
+ 1. Télécharger le projet
 
 
84
  ```sh
85
  git clone https://github.com/binary-husky/chatgpt_academic.git
86
  cd chatgpt_academic
87
  ```
88
 
89
+ 2. Configuration de la clé API
90
 
91
+ Dans `config.py`, configurez la clé API et les autres paramètres. Consultez les [réglages pour environnements réseau particuliers](https://github.com/binary-husky/gpt_academic/issues/1).
 
 
 
 
 
 
92
 
93
+ (P.S. Lorsque le programme est exécuté, il vérifie en premier s'il existe un fichier de configuration privé nommé `config_private.py` et remplace les paramètres portant le même nom dans `config.py` par les paramètres correspondants dans `config_private.py`. Par conséquent, si vous comprenez la logique de lecture de nos configurations, nous vous recommandons vivement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de `config.py`. `config_private.py` n'est pas contrôlé par Git et peut garantir la sécurité de vos informations privées. P.S. Le projet prend également en charge la configuration de la plupart des options via "variables d'environnement", le format d'écriture des variables d'environnement est référencé dans le fichier `docker-compose`. Priorité de lecture: "variables d'environnement" > `config_private.py` > `config.py`)
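À titre d'illustration seulement, la priorité de lecture décrite ci-dessus (« variables d'environnement » > `config_private.py` > `config.py`) peut être esquissée ainsi ; les noms `read_single_conf`, `default_cfg` et `private_cfg` sont hypothétiques et ne proviennent pas du code du projet :

```python
import os

def read_single_conf(key, default_cfg, private_cfg):
    """Esquisse hypothétique : résout une option de configuration selon la
    priorité « variables d'environnement » > config_private.py > config.py."""
    if key in os.environ:       # 1) les variables d'environnement l'emportent
        return os.environ[key]
    if key in private_cfg:      # 2) puis config_private.py (non suivi par Git)
        return private_cfg[key]
    return default_cfg[key]     # 3) enfin la valeur par défaut de config.py

default_cfg = {"GR_FAKE_API_KEY": "", "GR_FAKE_WEB_PORT": 50923}
private_cfg = {"GR_FAKE_API_KEY": "sk-votre-cle-privee"}
# config_private.py l'emporte sur config.py (en l'absence de variable d'environnement)
print(read_single_conf("GR_FAKE_API_KEY", default_cfg, private_cfg))
print(read_single_conf("GR_FAKE_WEB_PORT", default_cfg, private_cfg))
```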
 
 
 
94
 
 
 
 
 
95
 
96
+ 3. Installer les dépendances
97
+ ```sh
98
+ # (Option I : installation pour les utilisateurs de Python) (Python 3.9 ou supérieur ; plus la version est récente, mieux c'est). Remarque : utilisez la source pip officielle ou celle d'Aliyun. Pour changer temporairement de source : python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
99
+ python -m pip install -r requirements.txt
100
+
101
+ # (Option II : installation pour les non-utilisateurs de Python) Utilisez Anaconda ; les étapes sont similaires (https://www.bilibili.com/video/BV1rc411W7Dr) :
102
+ conda create -n gptac_venv python=3.11 # Créer l'environnement anaconda
103
+ conda activate gptac_venv # Activer l'environnement anaconda
104
+ python -m pip install -r requirements.txt # Même étape que l'installation avec pip
105
  ```
106
 
107
+ <details><summary>Cliquez ici pour afficher le texte si vous souhaitez prendre en charge THU ChatGLM/FDU MOSS en tant que backend.</summary>
108
+ <p>
109
+
110
+ 【Optional】 Si vous souhaitez prendre en charge THU ChatGLM/FDU MOSS en tant que backend, des dépendances supplémentaires doivent être installées (prérequis: compétent en Python + utilisez Pytorch + configuration suffisante de l'ordinateur):
111
  ```sh
112
+ # 【Étape optionnelle I】 Prise en charge de ChatGLM (Tsinghua). Remarque : si vous rencontrez l'erreur « Appel à ChatGLM échoué, les paramètres ChatGLM ne peuvent pas être chargés normalement », reportez-vous à ce qui suit : 1) la version installée par défaut est torch+cpu ; pour utiliser cuda, désinstallez torch puis réinstallez torch+cuda ; 2) si le modèle ne peut pas être chargé faute d'une configuration machine suffisante, vous pouvez modifier la précision du modèle dans request_llm/bridge_chatglm.py, en remplaçant AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) par AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
113
+ python -m pip install -r request_llm/requirements_chatglm.txt
114
+
115
+ # 【Étape optionnelle II】 Prise en charge de MOSS (Fudan)
116
+ python -m pip install -r request_llm/requirements_moss.txt
117
+ git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Remarque : lors de l'exécution de cette ligne, vous devez vous trouver à la racine du projet.
118
+
119
+ # 【Étape optionnelle III】 Assurez-vous que AVAIL_LLM_MODELS, dans le fichier de configuration config.py, contient les modèles souhaités. Les modèles actuellement pris en charge sont les suivants (la série jittorllms ne prend en charge que la solution docker pour le moment) :
120
+ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
121
  ```
122
 
123
+ </p>
124
+ </details>
125
+
126
+
127
+
128
  4. Exécution
129
  ```sh
130
  python main.py
131
+ ```
+ 
+ 5. Tester les plugins de fonctions
132
  ```
133
+ - Fonction modèle de plugin (demande à GPT ce qui s'est passé aujourd'hui dans l'histoire) ; vous pouvez utiliser cette fonction comme modèle pour implémenter des fonctionnalités plus complexes.
134
+ Cliquez sur "[Démo de modèle de plugin de fonction] Aujourd'hui dans l'histoire"
 
 
 
 
 
 
 
 
135
  ```
136
 
137
+ ## Installation - Méthode 2: Utilisation de Docker
138
 
139
+ 1. ChatGPT uniquement (recommandé pour la plupart des gens)
140
 
 
 
 
141
  ``` sh
142
+ git clone https://github.com/binary-husky/chatgpt_academic.git # Télécharger le projet
143
+ cd chatgpt_academic # Accéder au chemin
144
+ nano config.py # Éditez config.py avec n'importe quel éditeur de texte pour configurer "Proxy", "API_KEY" et "WEB_PORT" (p. ex. 50923)
145
+ docker build -t gpt-academic . # Installer
146
+
147
+ # (Dernière étape - choix 1) Dans un environnement Linux, utiliser `--net=host` est plus simple et plus rapide
 
 
148
  docker run --rm -it --net=host gpt-academic
149
+ # (Dernière étape - choix 2) Dans un environnement macOS/Windows, seule l'option -p permet d'exposer le port du conteneur (p. ex. 50923) vers un port de l'hôte.
150
+ docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
151
+ ```
152
 
153
+ 2. ChatGPT + ChatGLM + MOSS (il faut connaître Docker)
 
 
 
 
 
 
154
 
155
+ ``` sh
156
+ # Modifiez docker-compose.yml, supprimez la solution 1 et la solution 3, conservez la solution 2. Modifiez la configuration de la solution 2 dans docker-compose.yml en suivant les commentaires.
157
+ docker-compose up
158
  ```
159
 
160
+ 3. ChatGPT + LLAMA + PanGu + RWKV (il faut connaître Docker)
161
  ``` sh
162
+ # Modifiez docker-compose.yml, supprimez la solution 1 et la solution 2, conservez la solution 3. Modifiez la configuration de la solution 3 dans docker-compose.yml en suivant les commentaires.
163
+ docker-compose up
 
 
 
 
 
 
164
  ```
165
 
 
166
 
167
+ ## Installation - Méthode 3: Autres méthodes de déploiement
 
168
 
169
+ 1. Comment utiliser une URL de proxy inversé / Microsoft Azure Cloud API
170
+ Configurez simplement API_URL_REDIRECT selon les instructions de config.py.
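À titre d'exemple purement hypothétique (vérifiez le format exact dans les commentaires de `config.py`), la redirection prend la forme d'un dictionnaire associant l'URL officielle à l'URL de votre proxy inversé ; l'adresse `votre-proxy-inverse.example.com` ci-dessous est fictive :

```python
# Exemple hypothétique : remplacez l'adresse du proxy inversé par la vôtre.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
        "https://votre-proxy-inverse.example.com/v1/chat/completions",
}
print(API_URL_REDIRECT)
```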
171
 
172
+ 2. Déploiement distant sur un serveur cloud (connaissance et expérience des serveurs cloud requises)
173
+ Veuillez consulter le [wiki de déploiement-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97).
174
 
175
+ 3. Utilisation de WSL2 (sous-système Windows pour Linux)
176
+ Veuillez consulter le [wiki de déploiement-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2).
 
177
 
178
+ 4. Comment exécuter sous un sous-répertoire (tel que `http://localhost/subpath`)
179
+ Veuillez consulter les [instructions d'exécution de FastAPI](docs/WithFastapi.md).
180
 
181
+ 5. Utilisation de docker-compose
182
+ Veuillez lire docker-compose.yml, puis suivre les instructions fournies.
183
 
184
+ # Utilisation avancée
185
+ ## Personnalisation de nouveaux boutons pratiques / Plugins de fonctions personnalisées
186
 
187
+ 1. Personnalisation de nouveaux boutons pratiques (raccourcis académiques)
188
+ Ouvrez core_functional.py avec n'importe quel éditeur de texte, ajoutez une entrée comme suit, puis redémarrez le programme. (Si le bouton a été ajouté avec succès et est visible, le préfixe et le suffixe prennent en charge les modifications à chaud et ne nécessitent pas le redémarrage du programme pour prendre effet.)
189
+ Par exemple
190
  ```
191
+ "Super traduction anglais-chinois": {
192
+ # Préfixe : sera ajouté avant votre entrée. Utile par exemple pour décrire votre demande : traduire, expliquer du code, mettre en forme, etc.
193
+ "Prefix": "Veuillez traduire le contenu suivant en chinois, puis expliquer chaque terme spécialisé qui y apparaît à l'aide d'un tableau markdown :\n\n",
194
 
195
+ # Suffixe : sera ajouté après votre entrée. Combiné au préfixe, il permet par exemple d'entourer votre contenu de guillemets.
196
  "Suffix": "",
197
  },
198
  ```
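Pour illustrer le mécanisme ci-dessus, voici une esquisse hypothétique (la fonction `build_prompt` n'existe pas telle quelle dans le projet) de la façon dont une entrée de core_functional.py enveloppe l'entrée de l'utilisateur entre son préfixe et son suffixe :

```python
# Esquisse hypothétique du mécanisme Prefix/Suffix de core_functional.py.
core_functional = {
    "Super traduction": {
        "Prefix": "Veuillez traduire le contenu suivant en chinois :\n\n",
        "Suffix": "",
    },
}

def build_prompt(button_name, user_input):
    """Construit l'invite envoyée au modèle : Prefix + entrée + Suffix."""
    entry = core_functional[button_name]
    return entry["Prefix"] + user_input + entry["Suffix"]

print(build_prompt("Super traduction", "Attention is all you need."))
```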
 
199
  <div align="center">
200
  <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
201
  </div>
202
 
203
+ 2. Plugins de fonctions personnalisées
204
 
205
+ Écrivez des plugins de fonctions puissants pour accomplir toutes les tâches que vous pouvez imaginer, et même celles que vous n'imaginez pas.
206
+ Les plugins de ce projet sont très faciles à programmer et à déboguer. Si vous avez des connaissances de base en Python, vous pouvez implémenter votre propre plugin en suivant le modèle que nous fournissons.
207
+ Veuillez consulter le [guide des plugins de fonctions](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) pour plus de détails.
208
 
209
+ ---
210
+ # Dernières mises à jour
211
+
212
+ ## Nouvelles fonctionnalités en cours de déploiement.
213
 
214
+ 1. Fonction de sauvegarde de la conversation.
215
+ Appelez simplement "Enregistrer la conversation actuelle" dans la zone de plugin de fonction pour enregistrer la conversation actuelle en tant que fichier html lisible et récupérable. De plus, dans la zone de plugin de fonction (menu déroulant), appelez "Charger une archive de l'historique de la conversation" pour restaurer la conversation précédente. Astuce : cliquer directement sur "Charger une archive de l'historique de la conversation" sans spécifier de fichier permet de consulter le cache d'archive html précédent. Cliquez sur "Supprimer tous les enregistrements locaux de l'historique de la conversation" pour supprimer le cache d'archive html.
216
 
217
  <div align="center">
218
+ <img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
219
  </div>
220
 
221
 
 
222
 
223
+ 2. Générer un rapport. La plupart des plugins génèrent un rapport de travail après l'exécution.
224
  <div align="center">
225
+ <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
226
+ <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
227
+ <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
228
  </div>
229
 
230
+ 3. Conception modulaire des fonctionnalités : une interface simple prenant en charge des fonctions puissantes.
231
  <div align="center">
232
+ <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
233
+ <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
234
  </div>
235
 
236
+ 4. C'est un projet open source qui peut « se traduire lui-même ».
 
237
  <div align="center">
238
+ <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
239
  </div>
240
 
241
+ 5. Traduire d'autres projets open source n'est pas un problème.
242
  <div align="center">
243
+ <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
244
  </div>
245
 
 
246
  <div align="center">
247
+ <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
248
  </div>
249
 
250
+ 6. Fonction de décoration de live2d (désactivée par défaut, nécessite une modification de config.py).
251
  <div align="center">
252
+ <img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
 
 
253
  </div>
254
 
255
+ 7. Prise en charge du modèle de langage MOSS.
256
  <div align="center">
257
+ <img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
 
258
  </div>
259
 
260
+ 8. Génération d'images OpenAI.
261
+ <div align="center">
262
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
263
+ </div>
264
 
265
+ 9. Analyse et résumé audio OpenAI.
266
+ <div align="center">
267
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
268
+ </div>
269
 
270
+ 10. Relecture et correction du texte intégral en LaTeX.
271
  <div align="center">
272
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
273
  </div>
274
 
275
+
276
+ ## Versions :
277
+ - version 3.5 (À faire) : appeler toutes les fonctions plugins de ce projet en langage naturel (priorité élevée)
278
+ - version 3.4 (À faire) : amélioration du support multi-thread de chatglm en local
279
+ - version 3.3 : Fonction intégrée de recherche d'informations sur Internet
280
+ - version 3.2 : La fonction du plugin de fonction prend désormais en charge des interfaces de paramètres plus nombreuses (fonction de sauvegarde, décodage de n'importe quel langage de code + interrogation simultanée de n'importe quelle combinaison de LLM)
281
+ - version 3.1 : Prise en charge de l'interrogation simultanée de plusieurs modèles GPT ! Support api2d, équilibrage de charge multi-clé api.
282
+ - version 3.0 : Prise en charge de chatglm et autres LLM de petite taille.
283
+ - version 2.6 : Refonte de la structure des plugins, amélioration de l'interactivité, ajout de plus de plugins.
284
+ - version 2.5 : Auto-mise à jour, résolution des problèmes de texte trop long et de dépassement de jetons lors de la compilation du projet global.
285
+ - version 2.4 : (1) Nouvelle fonction de traduction de texte intégral PDF ; (2) Nouvelle fonction de permutation de position de la zone d'entrée ; (3) Nouvelle option de mise en page verticale ; (4) Amélioration des fonctions multi-thread de plug-in.
286
+ - version 2.3 : Amélioration de l'interactivité multithread.
287
+ - version 2.2 : Les plugins de fonctions peuvent désormais être rechargés à chaud.
288
+ - version 2.1 : Disposition pliable
289
+ - version 2.0 : Introduction de plugins de fonctions modulaires
290
+ - version 1.0 : Fonctionnalités de base
291
+
292
+ Groupe QQ n°2 des développeurs de gpt_academic : 610599535
293
+
294
+ - Problèmes connus
295
+ - Certains plugins de traduction de navigateur perturbent le fonctionnement de l'interface frontend de ce logiciel
296
+ - Des versions gradio trop hautes ou trop basses provoquent de nombreuses anomalies
297
+
298
+ ## Référence et apprentissage
299
 
300
  ```
301
+ De nombreux autres excellents projets ont été référencés dans le code, notamment :
302
+
303
+ # Projet 1 : ChatGLM-6B de Tsinghua :
304
+ https://github.com/THUDM/ChatGLM-6B
305
+
306
+ # Projet 2 : JittorLLMs de Tsinghua :
307
+ https://github.com/Jittor/JittorLLMs
308
 
309
+ # Projet 3 : Edge-GPT :
310
+ https://github.com/acheong08/EdgeGPT
311
+
312
+ # Projet 4 : ChuanhuChatGPT :
313
  https://github.com/GaiZhenbiao/ChuanhuChatGPT
314
 
315
+ # Projet 5 : ChatPaper :
316
+ https://github.com/kaixindelele/ChatPaper
 
317
 
318
+ # Plus :
319
+ https://github.com/gradio-app/gradio
320
+ https://github.com/fghrsh/live2d_demo
321
+ ```
docs/README_JP.md CHANGED
@@ -2,301 +2,325 @@
2
  >
3
  > このReadmeファイルは、このプロジェクトのmarkdown翻訳プラグインによって自動的に生成されたもので、100%正確ではない可能性があります。
4
  >
 
 
 
 
5
 
6
- # <img src="logo.png" width="40" > ChatGPT 学術最適化
7
 
8
- **このプロジェクトが好きだったら、スターをつけてください。もし、より使いやすい学術用のショートカットキーまたはファンクションプラグインを発明した場合は、issueを発行するかpull requestを作成してください。また、このプロジェクト自体によって翻訳されたREADMEは[英語説明書|](docs/README_EN.md)[日本語説明書|](docs/README_JP.md)[ロシア語説明書|](docs/README_RS.md)[フランス語説明書](docs/README_FR.md)もあります。**
 
9
 
10
- > **注意事項**
11
  >
12
- > 1. **赤色**のラベルが付いているファンクションプラグイン(ボタン)のみファイルを読み込めます。一部のプラグインはプラグインエリアのドロップダウンメニューにあります。新しいプラグインのPRを歓迎いたします!
13
  >
14
- > 2. このプロジェクトの各ファイルの機能は`self_analysis.md`(自己解析レポート)で詳しく説明されています。バージョンが追加されると、関連するファンクションプラグインをクリックして、GPTを呼び出して自己解析レポートを再生成することができます。一般的な質問は`wiki`にまとめられています。(`https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98`)
15
-
16
 
17
- <div align="center">
18
-
19
- 機能 | 説明
20
- --- | ---
21
- ワンクリック整形 | 論文の文法エラーを一括で正確に修正できます。
22
- ワンクリック日英翻訳 | 日英翻訳には、ワンクリックで対応できます。
23
- ワンクリックコード説明 | コードの正しい表示と説明が可能です。
24
- [カスタムショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | カスタムショートカットキーをサポートします。
25
- [プロキシサーバーの設定](https://www.bilibili.com/video/BV1rc411W7Dr) | プロキシサーバーの設定をサポートします。
26
- モジュラーデザイン | カスタム高階関数プラグインと[関数プラグイン]、プラグイン[ホット更新]のサポートが可能です。詳細は[こちら](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
27
- [自己プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン][ワンクリック理解](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)このプロジェクトのソースコード
28
- [プログラム解析機能](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン] ワンクリックで別のPython/C/C++/Java/Lua/...プロジェクトツ��ーを解析できます。
29
- 論文読解 | [関数プラグイン] LaTeX論文の全文をワンクリックで解読し、要約を生成します。
30
- LaTeX全文翻訳、整形 | [関数プラグイン] ワンクリックでLaTeX論文を翻訳または整形できます。
31
- 注釈生成 | [関数プラグイン] ワンクリックで関数の注釈を大量に生成できます。
32
- チャット分析レポート生成 | [関数プラグイン] 実行後、まとめレポートを自動生成します。
33
- [arxivヘルパー](https://www.bilibili.com/video/BV1LM4y1279X) | [関数プラグイン] 入力したarxivの記事URLで要約をワンクリック翻訳+PDFダウンロードができます。
34
- [PDF論文全文翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] PDF論文タイトルと要約を抽出し、全文を翻訳します(マルチスレッド)。
35
- [Google Scholar Integratorヘルパー](https://www.bilibili.com/video/BV19L411U7ia) | [関数プラグイン] 任意のGoogle Scholar検索ページURLを指定すると、gptが興味深い記事を選択します。
36
- 数式/画像/テーブル表示 | 数式のTex形式とレンダリング形式を同時に表示できます。数式、コードのハイライトをサポートしています。
37
- マルチスレッド関数プラグインサポート | ChatGPTをマルチスレッドで呼び出すことができ、大量のテキストやプログラムを簡単に処理できます。
38
- ダークグラジオ[テーマ](https://github.com/binary-husky/chatgpt_academic/issues/173)の起動 | 「/?__dark-theme=true」というURLをブラウザに追加することで、ダークテーマに切り替えることができます。
39
- [多数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)をサポート、[API2D](https://api2d.com/)インターフェースをサポート | GPT3.5、GPT4、[清華ChatGLM](https://github.com/THUDM/ChatGLM-6B)による同時サポートは、とても素晴らしいですね!
40
- huggingface免科学上网[オンライン版](https://huggingface.co/spaces/qingxu98/gpt-academic) | huggingfaceにログイン後、[このスペース](https://huggingface.co/spaces/qingxu98/gpt-academic)をコピーしてください。
41
- ...... | ......
42
 
43
 
44
- </div>
45
-
46
-
47
- - 新しいインターフェース(config.pyのLAYOUTオプションを変更するだけで、「左右レイアウト」と「上下レイアウト」を切り替えることができます)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
48
  <div align="center">
49
  <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
50
- </div>
51
-
52
 
53
- - すべてのボタンは、functional.pyを読み込んで動的に生成されます。カスタム機能を自由に追加して、クリップボードを解放します
54
  <div align="center">
55
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
56
  </div>
57
 
58
- - 色を修正/修正
 
59
  <div align="center">
60
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
61
  </div>
62
 
63
- - 出力に数式が含まれている場合、TeX形式とレンダリング形式の両方が表示され、コピーと読み取りが容易になります
 
64
  <div align="center">
65
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
66
  </div>
67
 
68
- - プロジェクトのコードを見るのが面倒?chatgptに整備されたプロジェクトを直接与えましょう
 
69
  <div align="center">
70
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
71
  </div>
72
 
73
- - 多数の大規模言語モデルの混合呼び出し(ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
 
 
74
  <div align="center">
75
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
76
  </div>
77
 
78
- 多数の大規模言語モデルの混合呼び出し[huggingfaceテスト版](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta)(huggigface版はchatglmをサポートしていません)
79
 
 
80
 
81
- ---
82
 
83
- ## インストール-方法1:直接運転 (Windows、LinuxまたはMacOS)
84
 
85
- 1. プロジェクトをダウンロードします。
86
  ```sh
87
  git clone https://github.com/binary-husky/chatgpt_academic.git
88
  cd chatgpt_academic
89
  ```
90
 
91
- 2. API_KEYとプロキシ設定を構成する
92
 
93
- `config.py`で、海外のProxyとOpenAI API KEYを構成して説明します。
94
- ```
95
- 1.あなたが中国にいる場合、OpenAI APIをスムーズに使用するには海外プロキシを設定する必要があります。構成の詳細については、config.py(1.その中のUSE_PROXYをTrueに変更し、2.手順に従ってプロキシを変更する)を詳細に読んでください。
96
- 2. OpenAI API KEYを構成する。OpenAIのウェブサイトでAPI KEYを取得してください。一旦API KEYを手に入れると、config.pyファイルで設定するだけです。
97
- 3.プロキシネットワークに関連する問題(ネットワークタイムアウト、プロキシが動作しない)をhttps://github.com/binary-husky/chatgpt_academic/issues/1にまとめました。
98
- ```
99
- (P.S. プログラム実行時にconfig.pyの隣にconfig_private.pyという名前のプライバシー設定ファイルを作成し、同じ名前の設定を上書きするconfig_private.pyが存在するかどうかを優先的に確認します。そのため、私たちの構成読み取りロジックを理解できる場合は、config.pyの隣にconfig_private.pyという名前の新しい設定ファイルを作成し、その中のconfig.pyから設定を移動してください。config_private.pyはgitで保守されていないため、プライバシー情報をより安全にすることができます。)
100
 
101
- 3. 依存関係をインストールします。
102
  ```sh
103
- # 選択肢があります。
104
  python -m pip install -r requirements.txt
105
 
 
 
 
 
 
106
 
107
- # (選択肢2) もしAnacondaを使用する場合、手順は同様です:
108
- # (選択肢2.1) conda create -n gptac_venv python=3.11
109
- # (選択肢2.2) conda activate gptac_venv
110
- # (選択肢2.3) python -m pip install -r requirements.txt
111
 
112
- # 注: 公式のpipソースまたはAlibabaのpipソースを使用してください。 別のpipソース(例:一部の大学のpip)は問題が発生する可能性があります。 一時的なソースの切り替え方法:
113
- # python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
114
- ```
115
 
116
- もしあなたが清華ChatGLMをサポートする必要がある場合、さらに多くの依存関係をインストールする必要があります(Pythonに慣れない方やコンピューターの設定が十分でない方は、試みないことをお勧めします):
117
  ```sh
118
- python -m pip install -r request_llm/requirements_chatglm.txt
119
- ```
120
 
121
- 4. 実行
122
- ```sh
123
- python main.py
124
- ```
125
 
126
- 5. 関数プラグインのテスト
127
- ```
128
- - Pythonプロジェクト分析のテスト
129
- 入力欄に `./crazy_functions/test_project/python/dqn` と入力し、「Pythonプロジェクト全体の解析」をクリックします。
130
- - 自己コード解読のテスト
131
- 「[マルチスレッドデモ] このプロジェクト自体を解析します(ソースを翻訳して解読します)」をクリックします。
132
- - 実験的な機能テンプレート関数のテスト(GPTが「今日の歴史」に何が起こったかを回答することが求められます)。この関数をテンプレートとして使用して、より複雑な機能を実装できます。
133
- 「[関数プラグインテンプレートデモ] 今日の歴史」をクリックします。
134
- - 関数プラグインエリアのドロップダウンメニューには他にも選択肢があります。
135
  ```
136
 
137
- ## インストール方法2:Dockerを使用する(Linux)
 
138
 
139
- 1. ChatGPTのみ(大多数の人にお勧めです)
140
- ``` sh
141
- # プロジェクトのダウンロード
142
- git clone https://github.com/binary-husky/chatgpt_academic.git
143
- cd chatgpt_academic
144
- # 海外プロキシとOpenAI API KEYの設定
145
- config.pyを任意のテキストエディタで編集する
146
- # インストール
147
- docker build -t gpt-academic .
148
- # 実行
149
- docker run --rm -it --net=host gpt-academic
150
 
151
- # 関数プラグインのテスト
152
- ## 関数プラグインテンプレート関数のテスト(GPTが「今日の歴史」に何が起こったかを回答することが求められます)。この関数をテンプレートとして使用して、より複雑な機能を実装できます。
153
- 「[関数プラグインテンプレートデモ] 今日の歴史」をクリックします。
154
- ## Latexプロジェクトの要約を書くテスト
155
- 入力欄に./crazy_functions/test_project/latex/attentionと入力し、「テックス論文を読んで要約を書く」をクリックします。
156
- ## Pythonプロジェクト分析のテスト
157
- 入力欄に./crazy_functions/test_project/python/dqnと入力し、[Pythonプロジェクトの全解析]をクリックします。
158
 
159
- 関数プラグインエリアのドロップダウンメニューには他にも選択肢があります。
 
 
 
 
 
 
 
160
  ```
161
 
162
- 2. ChatGPT + ChatGLM(Dockerに非常に詳しい人+十分なコンピューター設定が必要)
163
 
 
164
 
 
 
 
 
 
165
 
166
- ```sh
167
- # Dockerfileの編集
168
- cd docs && nano Dockerfile+ChatGLM
169
- # ビルド方法
170
- docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
171
- # 実行方法 (1) 直接実行:
172
- docker run --rm -it --net=host --gpus=all gpt-academic
173
- # 実行方法 (2) コンテナに入って調整する:
174
- docker run --rm -it --net=host --gpus=all gpt-academic bash
175
  ```
176
 
177
- ## インストール方法3:その他のデ���ロイ方法
178
 
179
- 1. クラウドサーバーデプロイ
180
- [デプロイwiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
 
 
181
 
182
- 2. WSL2を使用 (Windows Subsystem for Linux)
183
- [デプロイwiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
 
 
 
184
 
185
 
186
- ## インストール-プロキシ設定
187
- 1. 通常の方法
188
- [プロキシを設定する](https://github.com/binary-husky/chatgpt_academic/issues/1)
189
 
190
- 2. 初心者向けチュートリアル
191
- [初心者向けチュートリアル](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
192
 
 
 
193
 
194
- ---
 
195
 
196
- ## カスタムボタンの追加(学術ショートカットキー)
 
197
 
198
- `core_functional.py`を任意のテキストエディタで開き、以下のエントリーを追加し、プログラムを再起動してください。(ボタンが追加されて表示される場合、前置詞と後置詞はホット編集がサポートされているため、プログラムを再起動せずに即座に有効になります。)
 
 
 
 
199
 
200
- 例:
 
 
201
  ```
202
- "超级英译中": {
203
- # 前置詞 - あなたの要求を説明するために使用されます。翻訳、コードの説明、編集など。
204
- "Prefix": "以下のコンテンツを中国語に翻訳して、マークダウンテーブルを使用して専門用語を説明してください。\n\n",
205
 
206
- # 後置詞 - プレフィックスと共に使用すると、入力内容を引用符で囲むことができます。
207
  "Suffix": "",
208
  },
209
  ```
210
-
211
  <div align="center">
212
  <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
213
  </div>
214
 
 
- ---
-
- ## いくつかの機能の例
-
- ### 画像表示:

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
  </div>

- ### プログラムが自己解析できる場合:

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
  </div>

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
  </div>

- ### 他のPython/Cppプロジェクトの解析:

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
  </div>

  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
  </div>

- ### Latex論文の一括読解と要約生成
-
  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
  </div>

- ### 自動報告生成
-
  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
- <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
- <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
  </div>

- ### モジュール化された機能デザイン
-
  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
- <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
  </div>

- ### ソースコードの英語翻訳
-
  <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
  </div>
274
 
275
- ## Todo およびバージョン計画:
276
- - version 3.2+ (todo): 関数プラグインがより多くのパラメーターインターフェースをサポートするようになります。
277
- - version 3.1: 複数のgptモデルを同時にクエリし、api2dをサポートし、複数のapikeyの負荷分散をサポートします。
278
- - version 3.0: chatglmおよび他の小型llmのサポート
279
- - version 2.6: プラグイン構造を再構成し、相互作用性を高め、より多くのプラグインを追加しました。
280
- - version 2.5: 自己更新。総括的な大規模プロジェクトのソースコードをまとめた場合、テキストが長すぎる、トークンがオーバーフローする問題を解決します。
281
- - version 2.4: (1)PDF全文翻訳機能を追加。(2)入力エリアの位置を切り替える機能を追加。(3)垂直レイアウトオプションを追加。(4)マルチスレッド関数プラグインの最適化。
282
- - version 2.3: 多スレッドの相互作用性を向上させました。
283
- - version 2.2: 関数プラグインでホットリロードをサポート
284
- - version 2.1: 折りたたみ式レイアウト
285
- - version 2.0: モジュール化された関数プラグインを導入
286
- - version 1.0: 基本機能
287
 
288
- ## 参考および学習
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
289
 
 
 
 
290
 
291
- 以下は中国語のマークダウンファイルです。日本語に翻訳してください。既存のマークダウンコマンドを変更しないでください:
292
 
293
  ```
294
- 多くの優秀なプロジェクトの設計を参考にしています。主なものは以下の通りです:
 
 
 
295
 
296
- # 参考プロジェクト1:ChuanhuChatGPTから多くのテクニックを借用
 
 
 
 
 
 
297
  https://github.com/GaiZhenbiao/ChuanhuChatGPT
298
 
299
- # 参考プロジェクト2:清華ChatGLM-6B
300
- https://github.com/THUDM/ChatGLM-6B
301
- ```

  >
  > このReadmeファイルは、このプロジェクトのmarkdown翻訳プラグインによって自動的に生成されたもので、100%正確ではない可能性があります。
  >
+ > When installing dependencies, please strictly choose the versions specified in `requirements.txt`.
+ >
+ > `pip install -r requirements.txt`
+ >
 
+ # <img src="logo.png" width="40" > GPT 学术优化 (GPT Academic)

+ **もしこのプロジェクトが気に入ったら、スターをつけてください。より優れた学術ショートカットキーや機能プラグインを思いついた場合は、Issueを立てるかPull Requestを送ってください。**このプロジェクト自体によって翻訳された[英語](README_EN.md) | [日本語](README_JP.md) | [한국어](https://github.com/mldljyh/ko_gpt_academic) | [Русский](README_RS.md) | [Français](README_FR.md)のREADMEも用意しています。
+ GPTを使ってこのプロジェクトを任意の言語に翻訳するには、[`multi_language.py`](multi_language.py)を読んで実行してください(実験的)。

+ > **注意**
  >
+ > 1. **赤色**で表示された関数プラグイン(ボタン)のみ、ファイルの読み取りをサポートしています。一部のプラグインは、プラグインエリアの**ドロップダウンメニュー**内にあります。また、新しいプラグインのPRはどれも**最優先**で歓迎し、対応します!
  >
+ > 2. このプロジェクトの各ファイルの機能は、自己解析レポート[`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)で詳しく説明されています。バージョンが進化するにつれて、関連する関数プラグインをクリックし、GPTを呼び出してプロジェクトの自己解析レポートをいつでも再生成できます。よくある質問は[`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)にまとめられています。[インストール方法](#installation)。

+ > 3. このプロジェクトは、chatglmやRWKV、盤古(Pangu)など、中国国内の大規模言語モデルの利用にも対応しており、その試用を奨励しています。複数のAPIキーを共存させることができ、設定ファイルに`API_KEY="openai-key1,openai-key2,api2d-key3"`のように記入します。`API_KEY`を一時的に変更する場合は、入力エリアに一時的な`API_KEY`を入力してEnterキーを押せば有効になります。
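For illustration, splitting a comma-separated `API_KEY` setting and rotating over the keys can be sketched as follows. This is a hedged, minimal sketch and not the project's actual implementation; the `next_api_key` helper name is hypothetical.

```python
import itertools

# Hypothetical sketch of splitting API_KEY="key1,key2,..." and
# load-balancing requests over the configured keys (round-robin).
API_KEY = "openai-key1,openai-key2,api2d-key3"

keys = [k.strip() for k in API_KEY.split(",") if k.strip()]
key_cycle = itertools.cycle(keys)  # endless round-robin iterator

def next_api_key() -> str:
    """Return the next key in round-robin order."""
    return next(key_cycle)
```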
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

+ <div align="center">
+
+ 機能 | 説明
+ --- | ---
+ 一鍵校正 | 一鍵で校正可能、論文の文法エラーを検索することができる
+ 一鍵中英翻訳 | 一鍵で中英翻訳可能
+ 一鍵コード解説 | コードを表示し、解説し、生成し、コードに注釈をつけることができる
+ [カスタマイズ可能なショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | カスタマイズ可能なショートカットキーをサポートする
+ モジュール化された設計 | カスタマイズ可能な[強力な関数プラグイン](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions)をサポートし、プラグインは[ホットアップデート](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)に対応している
+ [自己プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン] このプロジェクトのソースコードを[一鍵で読解](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)できる
+ プログラム解析 | [関数プラグイン] 一鍵で他のPython/C/C++/Java/Lua/...プロジェクトを分析できる
+ 論文の読解、[翻訳](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] LaTex/PDF論文の全文を一鍵で読み解き、要約を生成することができる
+ LaTex全文[翻訳](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[校正](https://www.bilibili.com/video/BV1FT411H7c5/) | [関数プラグイン] LaTex論文の翻訳または校正を一鍵で行うことができる
+ 一括で注釈を生成 | [関数プラグイン] 一鍵で関数に注釈をつけることができる
+ Markdown[中英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [関数プラグイン] 上記の5種類の言語の[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)を見たことがありますか?
+ チャット分析レポート生成 | [関数プラグイン] 実行後、自動的に概要報告書を生成する
+ [PDF論文全文翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] PDF論文からタイトルと要約を抽出し、全文を翻訳する(マルチスレッド)
+ [Arxivアシスタント](https://www.bilibili.com/video/BV1LM4y1279X) | [関数プラグイン] arxiv記事のURLを入力するだけで、要約を一鍵で翻訳し、PDFをダウンロードできる
+ [Google Scholar 総合アシスタント](https://www.bilibili.com/video/BV19L411U7ia) | [関数プラグイン] 任意のGoogle Scholar検索ページURLを指定すると、gptが[related works](https://www.bilibili.com/video/BV1GP411U7Az/)を作成する
+ インターネット情報収集+GPT | [関数プラグイン] まずGPTに[インターネットから情報を収集](https://www.bilibili.com/video/BV1om4y127ck)させてから質問に回答させ、情報が常に最新であるようにする
+ 数式/画像/表表示 | 数式の[tex形式とレンダリング形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)を同時に表示し、数式とコードハイライトをサポートしている
+ マルチスレッド関数プラグインのサポート | chatgptをマルチスレッドで呼び出し、[大量のテキスト](https://www.bilibili.com/video/BV1FT411H7c5/)やプログラムを一鍵で処理できる
+ ダークテーマgradioの[起動](https://github.com/binary-husky/chatgpt_academic/issues/173) | ブラウザのURLの後ろに```/?__theme=dark```を追加すると、ダークテーマに切り替えることができる
+ [多数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)のサポート、[API2D](https://api2d.com/)のサポート | GPT3.5、GPT4、[清華ChatGLM](https://github.com/THUDM/ChatGLM-6B)、[復旦MOSS](https://github.com/OpenLMLab/MOSS)に同時対応
+ より多くのLLMモデルへの接続、[huggingfaceデプロイ](https://huggingface.co/spaces/qingxu98/gpt-academic)のサポート | Newbingインターフェイス(Newbing)、清華大学の[Jittorllms](https://github.com/Jittor/JittorLLMs)、[LLaMA](https://github.com/facebookresearch/llama)、[RWKV](https://github.com/BlinkDL/ChatRWKV)、[盘古α](https://openi.org.cn/pangu/)のサポート
+ さらに多くの新機能(画像生成など)を紹介... | この文書の最後に示す...
+
+ </div>
+
+ - 新しいインターフェース(`config.py`のLAYOUTオプションを変更することで、「左右配置」と「上下配置」を切り替えることができます)
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
  </div>
+
+ - All buttons are dynamically generated by reading functional.py, and custom functions can be freely added to free the clipboard.
 

  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
  </div>

+ - Polishing/Correction
+
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
  </div>

+ - If the output contains formulas, they are displayed in both TeX and rendered form for easy copying and reading.
+
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
  </div>

+ - Don't feel like reading the project code? Just ask ChatGPT directly.
+
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
  </div>

+ - Mixed calls of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
+
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
  </div>

+ ---
 
+ # Installation

+ ## Installation - Method 1: Run directly (Windows, Linux, or macOS)

+ 1. Download the project.

  ```sh
  git clone https://github.com/binary-husky/chatgpt_academic.git
  cd chatgpt_academic
  ```

+ 2. Configure the API_KEY.

+ Configure the API KEY and other settings in `config.py`, and see the [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
+
+ (P.S. When the program runs, it first checks for a private configuration file named `config_private.py` and uses its values to override the same-named options in `config.py`. If you understand this reading logic, we strongly recommend creating a new `config_private.py` next to `config.py` and transferring (copying) the options from `config.py` into it. `config_private.py` is not tracked by git, which keeps your private information more secure. The project also supports configuring most options through environment variables, whose format follows the `docker-compose` file. Reading priority: environment variables > `config_private.py` > `config.py`.)
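The override order described above can be sketched as follows. This is an illustrative sketch of the priority rule only, not the project's actual code; the `read_single_conf` helper name is hypothetical.

```python
import os

def read_single_conf(key, default=None):
    """Resolve a config option with the priority:
    environment variables > config_private.py > config.py."""
    # 1. Environment variables win if present.
    if key in os.environ:
        return os.environ[key]
    # 2. Fall back to config_private.py, which is not tracked by git.
    try:
        import config_private
        if hasattr(config_private, key):
            return getattr(config_private, key)
    except ImportError:
        pass
    # 3. Finally, the defaults shipped in config.py.
    try:
        import config
        return getattr(config, key, default)
    except ImportError:
        return default
```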
+
+ 3. Install dependencies.

  ```sh
+ # (Choice I: if you are familiar with Python; Python 3.9 or above, the newer the better.) Note: use the official pip source or the Aliyun pip source. To switch source temporarily: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
  python -m pip install -r requirements.txt

+ # (Choice II: if you are not familiar with Python) Use Anaconda; the steps are the same (https://www.bilibili.com/video/BV1rc411W7Dr):
+ conda create -n gptac_venv python=3.11  # Create the anaconda environment.
+ conda activate gptac_venv               # Activate the anaconda environment.
+ python -m pip install -r requirements.txt  # This step is the same as the pip installation step.
+ ```

+ <details><summary>If you need to support Tsinghua ChatGLM / Fudan MOSS as a backend, click to expand.</summary>
+ <p>

+ [Optional] If you need to support Tsinghua ChatGLM / Fudan MOSS as a backend, you need to install additional dependencies (prerequisites: familiar with Python, have used PyTorch, and a sufficiently powerful machine):

  ```sh
+ # Optional Step I: support Tsinghua ChatGLM. Note: if you encounter the error "Call ChatGLM fail, cannot load ChatGLM parameters normally": 1. The version installed above is torch+cpu; to use CUDA, uninstall torch and reinstall torch+cuda. 2. If the model cannot be loaded due to insufficient local hardware, you can lower the model precision in request_llm/bridge_chatglm.py by changing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).
+ python -m pip install -r request_llm/requirements_chatglm.txt

+ # Optional Step II: support Fudan MOSS.
+ python -m pip install -r request_llm/requirements_moss.txt
+ git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss  # Note: this line must be executed from the project root.

+ # Optional Step III: make sure AVAIL_LLM_MODELS in config.py contains the expected models. Currently supported models (the jittorllms series currently only supports the docker solution):
+ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
  ```

+ </p>
+ </details>
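Since the steps above ask for Python 3.9 or above, a quick generic interpreter check can be sketched like this (not part of the project itself):

```python
import sys

def meets_requirement(version_info=sys.version_info) -> bool:
    """Return True if the interpreter satisfies the Python 3.9+ requirement."""
    return tuple(version_info[:2]) >= (3, 9)
```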

+ 4. Run.
+
+ ```sh
+ python main.py
+ ```
+
+ 5. Test the function plugins.
+ ```
+ - Test the function plugin template (asks GPT to answer what happened in history on this day); you can use this function as a template to implement more complex features.
+ Click "[Function Plugin Template Demo] Today in History"
  ```
 

+ ## Installation - Method 2: Using Docker

+ 1. ChatGPT only (recommended for most people)

+ ``` sh
+ git clone https://github.com/binary-husky/chatgpt_academic.git  # Download the project
+ cd chatgpt_academic                                             # Enter the path
+ nano config.py                                                  # Edit config.py with any text editor: configure "Proxy", "API_KEY", "WEB_PORT" (e.g., 50923), etc.
+ docker build -t gpt-academic .                                  # Install

+ # (Last step, option 1) In a Linux environment, --net=host is more convenient and faster
+ docker run --rm -it --net=host gpt-academic
+ # (Last step, option 2) In a macOS/Windows environment, you must use the -p option to expose the container's port (e.g., 50923) on the host
+ docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
  ```

+ 2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)

+ ``` sh
+ # Modify docker-compose.yml: delete plans 1 and 3 and keep plan 2. Then modify the configuration of plan 2 in docker-compose.yml; see the comments in the file for instructions.
+ docker-compose up
+ ```

+ 3. ChatGPT + LLAMA + Pangu + RWKV (requires familiarity with Docker)
+ ``` sh
+ # Modify docker-compose.yml: delete plans 1 and 2 and keep plan 3. Then modify the configuration of plan 3 in docker-compose.yml; see the comments in the file for instructions.
+ docker-compose up
+ ```

+ ## Installation - Method 3: Other deployment methods

+ 1. How to use a proxy URL / Microsoft Azure API
+ Configure API_URL_REDIRECT according to the instructions in `config.py`.
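For illustration, an `API_URL_REDIRECT` entry typically maps the official endpoint to a proxy URL. The mapping format below follows the comments in `config.py`, but the target URL is a placeholder, not a real service:

```python
# Hypothetical example entry for config.py: redirect the official
# chat-completions endpoint to a self-hosted proxy (placeholder URL).
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions": "https://your-proxy.example.com/v1/chat/completions"
}
```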

+ 2. Deploy to a remote cloud server (requires knowledge of and experience with cloud servers)
+ Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

+ 3. Using WSL2 (Windows Subsystem for Linux)
+ Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

+ 4. How to run under a sub-path URL (such as `http://localhost/subpath`)
+ Please visit [FastAPI running instructions](docs/WithFastapi.md)

+ 5. Run with docker-compose
+ Please read docker-compose.yml and follow the instructions provided there.
+ ---
+ # Advanced Usage
+ ## Customize new convenience buttons / custom function plugins

+ 1. Customize new convenience buttons (academic shortcut keys)
+ Open `core_functional.py` with any text editor, add an entry as follows, and restart the program. (If the button has already been added and is visible, both the prefix and suffix support hot modification and take effect without restarting the program.)
+ For example:
  ```
+ "Super English-to-Chinese Translation": {
+     # Prefix: added before your input. For example, it can describe your request, such as translation, code explanation, polishing, etc.
+     "Prefix": "Please translate the following content into Chinese, and then explain each proper noun in the text in a markdown table:\n\n",

+     # Suffix: added after your input. Together with the prefix, it can, for example, surround your input with quotation marks.
      "Suffix": "",
  },
  ```

  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
  </div>
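The entry above is consumed roughly like this. This is a minimal sketch of the prefix/suffix mechanism, not the project's actual code; `core_functions` and `wrap_input` are illustrative names.

```python
# Minimal sketch: a core_functional.py-style entry and how a button
# could wrap the user's input with its Prefix and Suffix.
core_functions = {
    "Super English-to-Chinese Translation": {
        "Prefix": "Please translate the following content into Chinese:\n\n",
        "Suffix": "",
    },
}

def wrap_input(button_name: str, user_input: str) -> str:
    """Build the final prompt by surrounding the user's input
    with the button's Prefix and Suffix."""
    entry = core_functions[button_name]
    return entry["Prefix"] + user_input + entry["Suffix"]

prompt = wrap_input("Super English-to-Chinese Translation",
                    "Attention is all you need.")
```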

+ 2. Custom function plugins

+ Write powerful function plugins to perform any task you can imagine, and even tasks you cannot.
+ Writing and debugging plugins in this project is easy: with basic Python knowledge, you can implement your own plugin features by following the template we provide.
+ For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
 
 

+ ---
+ # Latest Update
+ ## New feature dynamics
+ 1. ダイアログの保存機能。関数プラグインエリアで「現在の会話を保存」を呼び出すと、現在のダイアログを読み取り可能かつ復元可能なHTMLファイルとして保存できます。さらに、関数プラグインエリア(ドロップダウンメニュー)で「ダイアログの履歴保存ファイルを読み込む」を呼び出すことで、以前の会話を復元することができます。Tips:ファイルを指定せずに「ダイアログの履歴保存ファイルを読み込む」をクリックすると、過去のHTML保存ファイルのキャッシュを表示できます。「すべてのローカルダイアログの履歴を削除」をクリックすると、すべてのHTML保存ファイルのキャッシュを削除できます。
  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500">
  </div>

+ 2. 報告書の生成。ほとんどのプラグインは、実行が終了した後に作業報告書を生成します。
+ <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300">
+ <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300">
+ <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300">
+ </div>

+ 3. モジュール化された機能設計。シンプルなインターフェースで強力な機能をサポートします。
  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400">
+ <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400">
  </div>

+ 4. 自己解析可能なオープンソースプロジェクトです。
  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500">
  </div>

+ 5. 他のオープンソースプロジェクトの解読も容易です。
  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500">
  </div>

  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500">
  </div>

+ 6. [Live2D](https://github.com/fghrsh/live2d_demo)による装飾の小機能です。(デフォルトでは無効になっており、有効にするには`config.py`を変更する必要があります。)
  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500">
  </div>

+ 7. 新たにMOSS大規模言語モデルのサポートを追加しました。
  <div align="center">
+ <img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500">
  </div>

+ 8. OpenAI画像生成
  <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500">
  </div>

+ 9. OpenAIオーディオの解析と要約
+ <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500">
+ </div>

+ 10. LaTeX全文の校正
  <div align="center">
+ <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500">
  </div>

+ ## バージョン:
+ - version 3.5(作業中):すべての関数プラグインを自然言語で呼び出せるようにする(高優先度)
+ - version 3.4(作業中):chatglmのローカルモデルのマルチスレッド対応を改善する
+ - version 3.3:+Web情報の総合機能
+ - version 3.2:関数プラグインがさらに多くのパラメータインターフェイスをサポート(会話の保存機能、任意の言語コードの解読+任意のLLM組み合わせへの同時問い合わせ)
+ - version 3.1:複数のGPTモデルに同時に質問できるようになりました!api2dをサポートし、複数のAPIキーの負荷分散をサポートします
+ - version 3.0:chatglmとその他の小型LLMのサポート
+ - version 2.6:プラグイン構造を再構築し、対話性を高め、より多くのプラグインを追加しました
+ - version 2.5:自己アップデート。長文やトークンのオーバーフローの問題を解決しました
+ - version 2.4:(1)PDF全文翻訳機能を追加。(2)入力エリアの位置切り替え機能を追加。(3)垂直レイアウトオプションを追加。(4)マルチスレッド関数プラグインを最適化
+ - version 2.3:マルチスレッド性能の向上
+ - version 2.2:関数プラグインのホットリロードをサポート
+ - version 2.1:折りたたみ式レイアウト
+ - version 2.0:モジュール化された関数プラグインを導入
+ - version 1.0:基本機能
+
+ gpt_academic開発者QQグループ-2:610599535

+ - 既知の問題
+     - 一部のブラウザの翻訳プラグインが、このソフトウェアのフロントエンドの動作を妨げることがあります
+     - gradioのバージョンが高すぎたり低すぎたりすると、さまざまな異常を引き起こします

+ ## 参考学習

  ```
+ コードの中には、他の優れたプロジェクトの設計を参考にした部分が多く含まれています:

+ # プロジェクト1:清華ChatGLM-6B:
+ https://github.com/THUDM/ChatGLM-6B

+ # プロジェクト2:清華JittorLLMs:
+ https://github.com/Jittor/JittorLLMs

+ # プロジェクト3:Edge-GPT:
+ https://github.com/acheong08/EdgeGPT

+ # プロジェクト4:ChuanhuChatGPT:
+ https://github.com/GaiZhenbiao/ChuanhuChatGPT

+ # プロジェクト5:ChatPaper:
+ https://github.com/kaixindelele/ChatPaper

+ # その他:
+ https://github.com/gradio-app/gradio
+ https://github.com/fghrsh/live2d_demo
+ ```
docs/README_RS.md CHANGED
@@ -2,204 +2,197 @@
  >
  > Этот файл README автоматически сгенерирован модулем перевода markdown этого проекта и может быть не на 100% корректным.
  >

- # <img src="logo.png" width="40" > ChatGPT Academic Optimization

- **Если вам понравился этот проект, пожалуйста, поставьте ему звезду. Если вы придумали более полезные академические ярлыки или функциональные плагины, не стесняйтесь открывать issue или отправлять pull request. У нас также есть [README на английском языке](docs/README_EN.md), переведённый этим же проектом.**

  > **Примечание**
  >
- > 1. Обратите внимание: только функциональные плагины (кнопки), отмеченные **красным цветом**, могут читать файлы; некоторые из них находятся в **выпадающем меню** области плагинов. Кроме того, мы приветствуем любые новые плагины и обрабатываем их с **наивысшим приоритетом**!
- >
- > 2. Функции каждого файла в этом проекте подробно описаны в отчёте самоанализа [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). По мере развития версий вы также можете в любой момент вызвать соответствующий функциональный плагин и с помощью GPT заново сгенерировать отчёт самоанализа проекта. Часто задаваемые вопросы собраны в [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
-
- <div align="center">
-
- Функция | Описание
- --- | ---
- Редактирование одним кликом | Поддержка редактирования одним кликом, поиск грамматических ошибок в академических статьях
- Переключение языков "Английский-Китайский" одним кликом | Одним кликом переключайте языки "Английский-Китайский"
- Разъяснение программного кода одним кликом | Вы можете правильно отобразить и объяснить программный код.
- [Настраиваемые сочетания клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настраиваемых сочетаний клавиш
- [Настройка сервера-прокси](https://www.bilibili.com/video/BV1rc411W7Dr) | Поддержка настройки сервера-прокси
- Модульный дизайн | Поддержка настраиваемых функциональных плагинов высших порядков и функциональных плагинов, поддерживающих [горячее обновление](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
- [Автоанализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] [Прочтение в один клик](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) кода программы проекта
- [Анализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] Один клик для анализа дерева других проектов Python/C/C++/Java/Lua/...
- Чтение статей | [Функциональный плагин] Одним кликом прочитайте весь текст статьи в LaTex и сгенерируйте краткое описание
- Перевод и редактирование статей LaTex | [Функциональный плагин] Перевод или редактирование LaTex-статьи одним нажатием кнопки
- Генерация комментариев в пакетном режиме | [Функциональный плагин] Одним кликом сгенерируйте комментарии к функциям в пакетном режиме
- Генерация сводных отчетов | [Функциональный плагин] Автоматически создавайте сводные отчеты после выполнения
- [Помощник по arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Функциональный плагин] Введите URL статьи arxiv, чтобы легко перевести резюме и загрузить PDF-файл
- [Перевод полного текста статьи в формате PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Функциональный плагин] Извлеките заголовок статьи, резюме и переведите весь текст статьи (многопоточно)
- [Помощник интеграции Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Функциональный плагин] Дайте GPT выбрать для вас интересные статьи на любой странице поиска Google Scholar.
- Отображение формул/изображений/таблиц | Одновременно отображается tex-форма и рендер-форма формул, поддержка формул и подсветки кода
- Поддержка многопоточных функциональных плагинов | Поддержка многопоточной работы с плагинами, обрабатывайте огромные объемы текста или программы одним кликом
- Запуск тёмной темы gradio ([подробнее](https://github.com/binary-husky/chatgpt_academic/issues/173)) | Добавьте /?__dark-theme=true в конец URL браузера, чтобы переключиться на тёмную тему
- [Поддержка нескольких моделей LLM](https://www.bilibili.com/video/BV1wT411p7yf), поддержка API2D | Работать одновременно с GPT3.5, GPT4 и [清华ChatGLM](https://github.com/THUDM/ChatGLM-6B) наверняка очень приятно, не так ли?
- Альтернатива huggingface без научной сети ([онлайн-эксперимент](https://huggingface.co/spaces/qingxu98/gpt-academic)) | Войдите в систему и скопируйте [это пространство](https://huggingface.co/spaces/qingxu98/gpt-academic)
- …… | ……
-
- </div>
-
- - Новый интерфейс (вы можете изменить настройку LAYOUT в config.py, чтобы переключаться между "горизонтальным расположением" и "вертикальным расположением")
- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
- </div>

- Вы профессиональный переводчик научных статей.

- - Все кнопки генерируются динамически путем чтения functional.py и могут быть легко настроены под пользовательские потребности, освобождая буфер обмена.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
  </div>

- - Редактирование/корректирование
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
  </div>

- - Если вывод содержит формулы, они отображаются одновременно в формате tex и в отрендеренном виде для удобства копирования и чтения.
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
  </div>

- - Лень смотреть код проекта? Просто покажите его chatgpt.
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
  </div>

- - Смешанный вызов нескольких больших языковых моделей (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
  </div>

- Смешанный вызов нескольких больших языковых моделей в [бета-версии huggingface](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (версия huggingface не поддерживает chatglm).
-
-
  ---
 
 

- ## Установка - Метод 1: Запуск напрямую (Windows, Linux или MacOS)
-
- 1. Скачайте проект
  ```sh
  git clone https://github.com/binary-husky/chatgpt_academic.git
  cd chatgpt_academic
  ```

- 2. Настройте API_KEY и прокси

- В файле `config.py` настройте зарубежный прокси и OpenAI API KEY; пояснения ниже
- ```
- 1. Если вы находитесь в Китае, вам нужно настроить зарубежный прокси, чтобы использовать OpenAI API. Пожалуйста, внимательно прочитайте config.py для получения инструкций (1. Измените USE_PROXY на True; 2. Измените прокси в соответствии с инструкциями).
- 2. Настройте API KEY OpenAI. Вам необходимо зарегистрироваться на сайте OpenAI и получить API KEY. После получения API KEY настройте его в файле config.py.
- 3. Вопросы, связанные с сетевыми проблемами (тайм-аут сети, прокси не работает), можно найти здесь: https://github.com/binary-husky/chatgpt_academic/issues/1
- ```
- (Примечание: при запуске программа проверяет наличие приватного файла конфигурации `config_private.py` и использует его параметры, перезаписывая одноимённые параметры в `config.py`. Поэтому, если вы понимаете логику чтения нашей конфигурации, мы настоятельно рекомендуем создать рядом с `config.py` новый файл `config_private.py` и перенести (скопировать) в него настройки из `config.py`. `config_private.py` не отслеживается git, что делает конфиденциальную информацию более защищённой.)

- 3. Установите зависимости
- ```sh
- # (Выбор 1) Рекомендуется
- python -m pip install -r requirements.txt

- # (Выбор 2) Если вы используете anaconda, шаги аналогичны:
- # (Шаг 2.1) conda create -n gptac_venv python=3.11
- # (Шаг 2.2) conda activate gptac_venv
- # (Шаг 2.3) python -m pip install -r requirements.txt

- # Примечание: используйте официальный источник pip или источник pip Aliyun. Другие источники pip могут вызывать проблемы. Временный способ смены источника:
- # python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
  ```

- Если требуется поддержка Tsinghua ChatGLM, необходимо установить дополнительные зависимости (предварительные условия: уверенное владение Python и достаточно мощный компьютер):
  ```sh
- python -m pip install -r request_llm/requirements_chatglm.txt
  ```

- 4. Запустите
  ```sh
  python main.py
  ```
-
- 5. Протестируйте функциональные плагины
- ```
- - Тестирование анализа проекта Python
-     В область ввода введите `./crazy_functions/test_project/python/dqn`, затем нажмите "Проанализировать весь проект Python"
- - Тестирование самостоятельного чтения кода
-     Щелкните "[Демонстрация многопоточности] Проанализировать сам проект (расшифровка исходного кода)"
- - Тестирование функции шаблонного плагина (требует, чтобы gpt ответил, что произошло сегодня в истории); вы можете использовать эту функцию как шаблон для более сложных функций
-     Щелкните "[Функции шаблонного плагина] День в истории"
- - В нижнем выпадающем меню доступны дополнительные функции
  ```

- ## Установка - Метод 2: Использование docker (Linux)

- 1. Только ChatGPT (рекомендуется для большинства пользователей):
  ``` sh
- # Скачать проект
- git clone https://github.com/binary-husky/chatgpt_academic.git
- cd chatgpt_academic
- # Настроить зарубежный прокси и OpenAI API KEY
- # Отредактируйте файл config.py в любом текстовом редакторе
- # Установка
- docker build -t gpt-academic .
- # Запуск
- docker run --rm -it --net=host gpt-academic
-
- # Проверка функциональности плагина
- ## Проверка шаблонной функции плагина (требуется, чтобы gpt ответил, что произошло "в истории на этот день"); вы можете использовать эту функцию в качестве шаблона для реализации более сложных функций.
- Нажмите "[Шаблонный демонстрационный плагин] История на этот день".
- ## Тест составления резюме для проекта на LaTeX
- В области ввода введите ./crazy_functions/test_project/latex/attention, а затем нажмите "Чтение реферата о тезисах статьи на LaTeX".
- ## Тестовый анализ проекта на Python
- Введите в область ввода ./crazy_functions/test_project/python/dqn, затем нажмите "Проанализировать весь проект на Python".

- Выбирайте больше функциональных плагинов в нижнем выпадающем меню.
  ```

- 2. ChatGPT + ChatGLM (требуется глубокое знание Docker и достаточно мощное компьютерное оборудование):

  ``` sh
- # Изменение Dockerfile
- cd docs && nano Dockerfile+ChatGLM
- # Как собрать (Dockerfile+ChatGLM находится в каталоге docs; сначала перейдите в него командой cd docs)
- docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
- # Как запустить (вариант 2): войти в контейнер и выполнить настройки перед запуском:
- docker run --rm -it --net=host --gpus=all gpt-academic bash
  ```

- ## Установка - Метод 3: Другие способы развертывания

- 1. Развертывание на удаленном облачном сервере
- Пожалуйста, посетите [Deploy Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

- 2. Использование WSL2 (Windows Subsystem for Linux)
- Пожалуйста, посетите [Deploy Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

- ## Установка - Настройка прокси
- ### Метод 1: Обычный способ
- [Конфигурация прокси](https://github.com/binary-husky/chatgpt_academic/issues/1)

- ### Метод 2: Руководство для новичков
- [Руководство для новичков](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)

  ---

- ## Настройка новой удобной кнопки (настройка горячих клавиш для научной работы)
- Откройте `core_functional.py` любым текстовым редактором, добавьте элементы, как показано ниже, затем перезапустите программу. (Если кнопка уже успешно добавлена и видна, то префикс и суффикс поддерживают горячее изменение и вступают в силу без перезапуска программы.)
- Например:
  ```
- "Супер анг-рус": {
- # Префикс, будет добавлен перед вашим вводом. Например, используется для описания ваших потребностей, таких как перевод, кодинг, редактирование и т. д.
- "Prefix": "Пожалуйста, переведите этот фрагмент на русский язык, а затем создайте пошаговую таблицу в markdown, чтобы объяснить все специализированные термины, которые встречаются в тексте:\n\n",

- # Суффикс, будет добавлен после вашего ввода. Например, совместно с префиксом можно обрамить ваш ввод в кавычки.
  "Suffix": "",
  },
  ```
@@ -207,85 +200,79 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
  <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
  </div>

- ---

- ## Демонстрация некоторых возможностей

- ### Отображение изображений:

- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
- </div>

- ### Программа может анализировать и разбирать сама себя:

- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
- </div>

- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
- </div>

- ### Анализ других проектов на Python/C++:
- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
- </div>

- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
- </div>

- ### Чтение LaTeX-статей и генерация резюме в один клик
- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
- </div>

- ### Автоматическое создание отчетов
- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
- <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
- <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
- </div>

- ### Модульный дизайн функций
- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
- <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
- </div>

- ### Перевод исходного кода на английский язык

- <div align="center">
- <img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
- </div>

- ## Todo и план версий:
- - version 3.2+ (todo): функции-плагины поддерживают больше интерфейсов параметров
- - version 3.1: поддержка одновременного опроса нескольких моделей gpt! Поддержка api2d, поддержка балансировки нагрузки между несколькими apikey.
- - version 3.0: поддержка chatglm и других небольших llm
- - version 2.6: реструктурирована система плагинов, повышена интерактивность, добавлено больше плагинов
- - version 2.5: самообновление; решение проблемы слишком длинного текста и переполнения токенов при переводе исходного кода всего проекта
- - version 2.4: (1) добавлена функция перевода всего PDF-документа; (2) добавлена функция изменения положения области ввода; (3) добавлена опция вертикального макета; (4) оптимизация многопоточности плагинов.
- - version 2.3: улучшение многопоточной интерактивности
- - version 2.2: функции-плагины поддерживают горячую перезагрузку
- - version 2.1: блочная раскладка
- - version 2.0: модульный дизайн функций-плагинов
- - version 1.0: базовые функции
-
- ## Ссылки на изучение и обучение

- ```
- В коде использовано много хороших дизайнерских решений из других отличных проектов, в том числе:

- # Project1: использование многих приемов из ChuanhuChatGPT
  https://github.com/GaiZhenbiao/ChuanhuChatGPT

- # Project2: ChatGLM-6B от Tsinghua (THUDM):
- https://github.com/THUDM/ChatGLM-6B
- ```

  >
  > Этот файл перевода автоматически сгенерирован модулем перевода markdown этого проекта и может быть не на 100% корректным.
  >
+ # <img src="logo.png" width="40" > GPT Академическая оптимизация (GPT Academic)

+ **Если вам нравится этот проект, пожалуйста, поставьте ему звезду. Если вы придумали более полезные языковые ярлыки или функциональные плагины, не стесняйтесь открывать issue или pull request.**
+ Чтобы перевести этот проект на произвольный язык с помощью GPT, ознакомьтесь и запустите [`multi_language.py`](multi_language.py) (экспериментальный).

+ > **Примечание**
+ >
+ > 1. Обратите внимание, что только функциональные плагины (кнопки), помеченные **красным цветом**, поддерживают чтение файлов; некоторые плагины находятся в **выпадающем меню** в области плагинов. Кроме того, мы приветствуем любые новые плагины и обрабатываем соответствующие pull requests с наивысшим приоритетом!
+ >
+ > 2. Функциональность каждого файла проекта описана в отчете самоанализа [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). С выходом новых версий вы можете в любое время пересоздать отчет о самоанализе проекта, щелкнув соответствующий функциональный плагин и вызвав GPT. Часто задаваемые вопросы собраны в [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Метод установки](#installation).
+ >
+ > 3. Этот проект совместим с китайскими языковыми моделями (chatglm, RWKV, Pangu и т. д.) и поощряет их использование. Поддерживается несколько api-key одновременно; их можно указать в файле конфигурации, например `API_KEY="openai-key1,openai-key2,api2d-key3"`. Если требуется временно изменить `API_KEY`, введите временный `API_KEY` в области ввода и нажмите клавишу Enter, чтобы он вступил в силу.

  > **Примечание**
  >
+ > При установке зависимостей строго выбирайте версии, **указанные в файле requirements.txt**.
+ >
+ > `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
+ Функция | Описание
+ --- | ---
+ Однокнопочная полировка текста | Поддержка однокнопочной полировки и поиска грамматических ошибок в научных статьях
+ Однокнопочный перевод на английский и китайский | Однокнопочный перевод на английский и китайский
+ Однокнопочное объяснение кода | Показ кода, объяснение его, генерация кода, комментирование кода
+ [Настройка быстрых клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настройки быстрых клавиш
+ Модульный дизайн | Поддержка мощных настраиваемых [функциональных плагинов](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions); плагины поддерживают [горячую замену](https://github.com/binary-husky/chatgpt_academic/wiki/Function-Plug-in-Guide)
+ [Анализ своей программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] [Однокнопочный просмотр](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academicProject-Self-analysis-Report) исходного кода этого проекта
+ [Анализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] Однокнопочный анализ дерева других проектов Python/C/C++/Java/Lua/...
+ Чтение статей, [перевод](https://www.bilibili.com/video/BV1KT411x7Wn) статей | [Функциональный плагин] Однокнопочное чтение полного текста научных статей и генерация резюме
+ Полный перевод [LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) и совершенствование | [Функциональный плагин] Однокнопочный перевод или совершенствование статьи LaTeX
+ Автоматическое комментирование | [Функциональный плагин] Однокнопочное автоматическое генерирование комментариев функций
+ [Перевод](https://www.bilibili.com/video/BV1yo4y157jV/) Markdown на английский и китайский | [Функциональный плагин] Вы видели [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) на этих 5 языках?
+ Отчет о чат-анализе | [Функциональный плагин] После запуска будет автоматически сгенерирован сводный отчет
+ Функция перевода полного текста [PDF-статьи](https://www.bilibili.com/video/BV1KT411x7Wn) | [Функциональный плагин] Извлечение заголовка и резюме [PDF-статьи](https://www.bilibili.com/video/BV1KT411x7Wn) и перевод всего документа (многопоточность)
+ [Arxiv Helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Функциональный плагин] Введите URL статьи на arxiv и одним щелчком мыши переведите резюме и загрузите PDF
+ [Google Scholar Integration Helper](https://www.bilibili.com/video/BV19L411U7ia) | [Функциональный плагин] При заданном любом URL страницы поиска в Google Scholar позвольте gpt вам помочь [написать обзор](https://www.bilibili.com/video/BV1GP411U7Az/)
+ Сбор интернет-информации + GPT | [Функциональный плагин] Однокнопочный [запрос информации из Интернета через GPT](https://www.bilibili.com/video/BV1om4y127ck) с последующим ответом на вопрос, чтобы информация никогда не устаревала
+ Отображение формул / изображений / таблиц | Может одновременно отображать формулы в [формате Tex и рендеринге](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), поддерживает формулы, подсвечивает код
+ Поддержка функций с многопоточностью | Поддержка многопоточного вызова chatgpt, однокнопочная обработка [больших объемов текста](https://www.bilibili.com/video/BV1FT411H7c5/) или программ
+ Темная тема gradio для запуска приложений | Добавьте ```/?__theme=dark``` после URL в браузере, чтобы переключиться на темную тему
+ [Поддержка нескольких моделей LLM](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) | Одновременное обслуживание моделями GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) и [Fudan MOSS](https://github.com/OpenLMLab/MOSS)
+ Подключение нескольких новых моделей LLM, поддержка деплоя на [huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Подключение интерфейса Newbing (новый Bing), поддержка [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) и [Pangu α](https://openi.org.cn/pangu/)
+ Больше новых функций (генерация изображений и т. д.) | См. в конце этого файла…

+ - All buttons are dynamically generated by reading functional.py, and custom functions can be freely added to liberate the clipboard
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
  </div>

+ - Revision/Correction
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
  </div>

+ - If the output contains formulas, they will be displayed in both tex and rendered form for easy copying and reading
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
  </div>

+ - Don't feel like looking at project code? Show the entire project directly in chatgpt
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
  </div>

+ - Mixing multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
  <div align="center">
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
  </div>

  ---
+ # Installation
+ ## Installation - Method 1: Run directly (Windows, Linux or MacOS)

+ 1. Download the project
  ```sh
  git clone https://github.com/binary-husky/chatgpt_academic.git
  cd chatgpt_academic
  ```

+ 2. Configure API_KEY

+ In `config.py`, configure the API KEY and other settings; see [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).

+ (P.S. When the program is running, it will first check whether there is a secret configuration file named `config_private.py` and use the configuration in it to override the options of the same name in `config.py`. Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and move (copy) the configuration from `config.py` into `config_private.py`. `config_private.py` is not tracked by git, which keeps your private information more secure. P.S. The project also supports configuring most options through `environment variables`; the format of environment variables follows the `docker-compose` file. Read priority: `environment variable` > `config_private.py` > `config.py`.)
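The read priority described above (`environment variable` > `config_private.py` > `config.py`) can be sketched in a few lines of Python. This is an illustrative sketch of the behavior only, not the project's actual loader code; the helper name `read_single_conf` is assumed here:

```python
import importlib
import os

def read_single_conf(name, default=None):
    """Resolve one option: environment variable > config_private.py > config.py."""
    # 1. Environment variables take the highest priority
    if name in os.environ:
        return os.environ[name]
    # 2. Then the untracked config_private.py, then the tracked config.py
    for module_name in ("config_private", "config"):
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            continue  # file absent; fall through to the next source
        if hasattr(module, name):
            return getattr(module, name)
    # 3. Nothing found in any source
    return default
```

Keeping secrets in `config_private.py` works because git never sees that file, while the tracked `config.py` keeps only safe defaults.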

+ 3. Install dependencies
+ ```sh
+ # (Option I: If familiar with Python) (Python version 3.9 or above, the newer the better). Note: use the official pip source or the aliyun pip source; temporary source-switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
+ python -m pip install -r requirements.txt

+ # (Option II: If unfamiliar with Python) Use Anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
+ conda create -n gptac_venv python=3.11 # create an Anaconda environment
+ conda activate gptac_venv # activate the Anaconda environment
+ python -m pip install -r requirements.txt # this step is the same as the pip installation
  ```

+ <details><summary>If you need to support Tsinghua ChatGLM/Fudan MOSS as backend, click here to expand</summary>
+ <p>
+
+ [Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as backend, you need to install more dependencies (prerequisites: familiar with Python + have used Pytorch + the computer is powerful enough):
  ```sh
+ # [Optional step I] Support Tsinghua ChatGLM. Note: if you encounter the "Call ChatGLM fail, cannot load ChatGLM parameters normally" error, refer to the following: 1. The default installation above is the torch+cpu version; to use cuda you need to uninstall torch and reinstall torch+cuda; 2. If the model cannot be loaded due to insufficient local hardware, you can lower the model precision in request_llm/bridge_chatglm.py by changing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
+ python -m pip install -r request_llm/requirements_chatglm.txt
+
+ # [Optional step II] Support Fudan MOSS
+ python -m pip install -r request_llm/requirements_moss.txt
+ git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: when executing this line, you must be in the project root path
+
+ # [Optional step III] Make sure AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently, all supported models are as follows (the jittorllms series currently only supports the docker solution):
+ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
  ```

+ </p>
+ </details>

+ 4. Run
  ```sh
  python main.py
+ ```

+ 5. Testing Function Plugin
  ```
+ - Testing function plugin template function (requires GPT to answer what happened "on this day in history"); you can use this function as a template to implement more complex functions
+ Click "[Function plugin Template Demo] On this day in history"
  ```

+ ## Installation - Method 2: Using Docker

+ 1. ChatGPT only (recommended for most people)

  ``` sh
+ git clone https://github.com/binary-husky/chatgpt_academic.git # download the project
+ cd chatgpt_academic # enter the path
+ nano config.py # edit config.py with any text editor to configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923)
+ docker build -t gpt-academic . # install

+ # (Last step, option 1) In a Linux environment, using `--net=host` is more convenient and faster
+ docker run --rm -it --net=host gpt-academic
+ # (Last step, option 2) In a macOS/Windows environment, only the -p option can be used to expose the container's port (e.g. 50923) on the host
+ docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
  ```

+ 2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)

  ``` sh
+ # Edit docker-compose.yml: delete solutions 1 and 3 and keep solution 2. Modify the configuration of solution 2 in docker-compose.yml; refer to the comments in it
+ docker-compose up
  ```

+ 3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker)
+ ``` sh
+ # Edit docker-compose.yml: delete solutions 1 and 2 and keep solution 3. Modify the configuration of solution 3 in docker-compose.yml; refer to the comments in it
+ docker-compose up
+ ```

+ ## Installation - Method 3: Other Deployment Methods

+ 1. How to use a reverse proxy URL / Microsoft Azure API
+ Configure API_URL_REDIRECT according to the instructions in `config.py`.

+ 2. Remote cloud server deployment (requires knowledge and experience of cloud servers)
+ Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

+ 3. Using WSL2 (Windows Subsystem for Linux)
+ Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

+ 4. How to run under a sub-URL (such as `http://localhost/subpath`)
+ Please visit [FastAPI Operation Instructions](docs/WithFastapi.md)

+ 5. Running with docker-compose
+ Please read docker-compose.yml and follow the prompts in it.

  ---
+ # Advanced Usage
+ ## Customize new convenient buttons / custom function plugins

+ 1. Customize new convenient buttons (academic shortcuts)
+ Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has already been added successfully and is visible, both the prefix and the suffix can be hot-modified without restarting the program.)
+ For example:
  ```
+ "Super English to Chinese": {
+ # Prefix, which will be added before your input. For example, use it to describe your request, such as translation, code explanation, polishing, etc.
+ "Prefix": "Please translate the following content into Chinese, and then explain each proper noun that appears in the text with a markdown table:\n\n",

+ # Suffix, which will be added after your input. For example, together with the prefix, you can enclose your input in quotes.
  "Suffix": "",
  },
  ```
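The entry above means every custom button simply wraps what you type: the final prompt is `Prefix + your input + Suffix`. A minimal sketch of that composition (an illustration with a hypothetical helper, not the project's actual code):

```python
def apply_button(entry, user_input):
    # A button's prompt is just Prefix + input + Suffix
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")

# A hypothetical entry in the style of core_functional.py
quote_input = {
    "Prefix": "Summarize the following text:\n\"",
    "Suffix": "\"",
}

# The Suffix lets the Prefix's opening quote be closed around the input
prompt = apply_button(quote_input, "Attention is all you need.")
```

This is also why prefix/suffix edits can be hot-modified: the strings are read each time a button is pressed.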
 
  <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
  </div>

+ 2. Custom function plugin

+ Write powerful function plugins to perform any task you can imagine, and even tasks you cannot.
+ Debugging and writing plugins in this project is easy: as long as you have some basic knowledge of Python, you can implement your own plugin by imitating the template we provide.
+ Please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) for details.
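As a rough illustration of the plugin shape: the generator signature below mirrors the call used in `crazy_functions/crazy_functions_test.py`, but the body is a simplified mock for illustration, not real project code:

```python
def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # Plugins are generators: every yield pushes the current state to the UI
    chatbot.append((txt, "Analyzing..."))
    yield chatbot, history, "waiting"
    # A real plugin would call the LLM here, using llm_kwargs / plugin_kwargs
    reply = f"Done: processed {len(txt)} characters"
    chatbot[-1] = (txt, reply)
    history.extend([txt, reply])
    yield chatbot, history, "finished"

# Drive it the way the UI would
chatbot, history = [], []
for cb, hist, status in demo_plugin("hello", {}, {}, chatbot, history, "", 8080):
    pass  # each iteration would refresh the front-end
```

Because the plugin yields intermediate states, the UI stays responsive even while a long LLM call is in flight.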

+ ---
+ # Latest Update
+ ## New feature updates


+ 1. Сохранение диалогов. Вызовите "Сохранить текущий диалог" в области функциональных плагинов, чтобы сохранить текущий диалог как читаемый и восстановимый файл HTML. Кроме того, вызовите "Загрузить архив истории диалога" в области функциональных плагинов (выпадающее меню), чтобы восстановить предыдущую сессию. Совет: если нажать "Загрузить архив истории диалога" без указания файла, можно просмотреть кэш исторических файлов HTML. Нажмите "Удалить все локальные записи истории диалогов", чтобы удалить все файловые кэши HTML.

+ 2. Создание отчетов. Большинство плагинов создают рабочий отчет после завершения выполнения.

+ 3. Модульный дизайн функций: простой интерфейс, но мощный функционал.

+ 4. Это проект с открытым исходным кодом, который может «сам переводить себя».

+ 5. Перевод других проектов с открытым исходным кодом - это не проблема.

+ 6. Небольшие функции для украшения [live2d](https://github.com/fghrsh/live2d_demo) (по умолчанию отключены, нужно изменить `config.py`).

+ 7. Поддержка большой языковой модели MOSS.

+ 8. Генерация изображений с помощью OpenAI.

+ 9. Анализ и подведение итогов аудиофайлов с помощью OpenAI.

+ 10. Полный цикл проверки правописания с использованием LaTeX.

+ ## Версии:
+ - Версия 3.5 (Todo): использование естественного языка для вызова функций-плагинов проекта (высокий приоритет)
+ - Версия 3.4 (Todo): улучшение многопоточной поддержки локальных больших моделей.
+ - Версия 3.3: добавлена функция объединения интернет-информации.
+ - Версия 3.2: функции-плагины поддерживают больше интерфейсов параметров (сохранение диалогов, анализ любого языка программирования и одновременный запрос произвольных комбинаций LLM).
+ - Версия 3.1: поддержка одновременного запроса нескольких моделей GPT! Поддержка api2d, сбалансированное распределение нагрузки по нескольким ключам api.
+ - Версия 3.0: поддержка chatglm и других небольших LLM.
+ - Версия 2.6: перестройка структуры плагинов, улучшение интерактивности, добавлено больше плагинов.
+ - Версия 2.5: автоматическое обновление для решения проблемы длинного текста и переполнения токенов при переводе исходного кода всего проекта.
+ - Версия 2.4: (1) добавлена функция полного перевода PDF; (2) добавлена функция переключения положения области ввода; (3) добавлена опция вертикального макета; (4) оптимизация многопоточности плагинов.
+ - Версия 2.3: улучшение многопоточной интерактивности.
+ - Версия 2.2: функции-плагины поддерживают горячую перезагрузку.
+ - Версия 2.1: раскрывающийся макет.
+ - Версия 2.0: использование модульных функций-плагинов.
+ - Версия 1.0: базовые функции.

+ QQ-группа разработчиков gpt_academic №2: 610599535

+ - Известные проблемы
+ - Некоторые браузерные плагины перевода мешают работе фронтенда этого программного обеспечения
+ - Слишком высокая или слишком низкая версия gradio может вызывать множество исключений


+ ## Ссылки и учебные материалы

+ ```
+ Мы использовали многие концепты кода из других отличных проектов, включая:

+ # Проект 1: Tsinghua ChatGLM-6B:
+ https://github.com/THUDM/ChatGLM-6B

+ # Проект 2: Tsinghua JittorLLMs:
+ https://github.com/Jittor/JittorLLMs

+ # Проект 3: Edge-GPT:
+ https://github.com/acheong08/EdgeGPT

+ # Проект 4: Chuanhu ChatGPT:
  https://github.com/GaiZhenbiao/ChuanhuChatGPT

+ # Проект 5: ChatPaper:
+ https://github.com/kaixindelele/ChatPaper

+ # Больше:
+ https://github.com/gradio-app/gradio
+ https://github.com/fghrsh/live2d_demo
+ ```